Kappa index of agreement

This tool calculates the Kappa index of agreement (KIA) for two classified raster images. The user must specify the names of the two input images and the output HTML file. The output file will be displayed automatically but can also be viewed afterwards in any web browser. The input images must be of a categorical data type, i.e. they must contain classes. As a measure of overall classification accuracy, the KIA is more robust than a simple percent-agreement calculation because it accounts for the agreement that would occur by random chance. In addition to the KIA, the tool outputs the producer's and user's accuracies, the overall accuracy, and the error matrix. The KIA is often used as a means of assessing the accuracy of an image classification analysis.
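
For reference, the KIA is Cohen's kappa statistic computed from the error (confusion) matrix. A standard formulation, written here with assumed notation where n_{ij} is the count in row i, column j of the error matrix, n_{i+} and n_{+i} are the row and column totals, and N is the grand total, is:

\kappa = \frac{p_o - p_e}{1 - p_e}, \qquad
p_o = \frac{1}{N}\sum_{i} n_{ii}, \qquad
p_e = \frac{1}{N^2}\sum_{i} n_{i+}\, n_{+i}

where p_o is the observed proportion of agreement (the diagonal of the error matrix) and p_e is the proportion of agreement expected by chance from the row and column marginals.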

See Also:

Scripting:

The following is an example of a Python script that uses this tool:

wd = pluginHost.getWorkingDirectory()
# the classified image being assessed
classificationImage = wd + "class.dep"
# the reference (ground-truth) image
referenceImage = wd + "reference.dep"
# the output HTML report
outputFile = wd + "kappa_analysis.html"
args = [classificationImage, referenceImage, outputFile]
pluginHost.runPlugin("KappaIndex", args, False)

This is a Groovy script also using this tool:

def wd = pluginHost.getWorkingDirectory()
// the classified image being assessed
def classificationImage = wd + "class.dep"
// the reference (ground-truth) image
def referenceImage = wd + "reference.dep"
// the output HTML report
def outputFile = wd + "kappa_analysis.html"
String[] args = [classificationImage, referenceImage, outputFile]
pluginHost.runPlugin("KappaIndex", args, false)

Credits: