r.kappa calculates the error matrix of the two map layers and prepares the table from which the report is created. Kappa values are computed for the overall classification and for each class, along with their variances. The percentages of commission and omission error, the number of correctly classified pixels, the total area in pixel counts, and the overall percentage of correctly classified pixels are also tabulated.
The report is written to a plain-text output file named by the user when running the program. To obtain a machine-readable version, specify the JSON output format.
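As a minimal sketch, the machine-readable report could be requested and parsed from Python through the grass.script interface. The parameter name format="json" and the assumption that the report goes to standard output when no output file is given are taken from the statement above and should be verified against the module's --help output; the map names come from the examples further down and the code must run inside a GRASS session.

import json

import grass.script as gs

# Run r.kappa and capture the report from standard output.
# format="json" is assumed from the manual's note about a JSON output format.
report = gs.read_command(
    "r.kappa",
    classification="landuse96_28m",
    reference="landclass96",
    format="json",
)

stats = json.loads(report)
# Values that could not be computed are reported as null (None after parsing).
print(sorted(stats.keys()))  # inspect which statistics are reported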
The body of the report is arranged in panels. The categories of the classified result map layer are arranged along the vertical axis of the table, while the categories of the reference map layer run along the horizontal axis. Each panel has a maximum of 5 categories (9 in wide format) across the top. In addition, the last column of the last panel gives, for each row, the total across all columns. All of the categories of the map layer arranged along the vertical axis are included in each panel. A total at the bottom of each column gives the sum of all rows in that column.
All output variables (except the kappa variance) have been validated to produce correct values according to the formulas given by Rossiter, D.G., 2004, "Technical Note: Statistical methods for accuracy assessment of classified thematic maps".
It is recommended to reclassify the categories of the classified result map layer into a more manageable number before running r.kappa, because r.kappa calculates and reports information for each and every category.
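As an illustrative sketch only, such a reclassification could be done with r.reclass before the comparison; the category numbers, labels, and the output map name landuse96_simple below are hypothetical and must be adapted to the actual classified map.

import grass.script as gs

# Hypothetical reclass rules: merge detailed categories into a few broader
# classes. The category numbers and labels are placeholders.
rules = """\
1 2 3 = 1 developed
4 5   = 2 agriculture
6 7 8 = 3 forest
*     = NULL
"""

# r.reclass reads the rules from standard input when rules="-" is given.
gs.write_command(
    "r.reclass",
    input="landuse96_28m",
    output="landuse96_simple",
    rules="-",
    stdin=rules,
)

# Compare the simplified classification against the reference map.
gs.run_command(
    "r.kappa",
    classification="landuse96_simple",
    reference="landclass96",
)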
NAs in the output mean that it was not possible to calculate the value (e.g. the calculation would involve division by zero). In JSON output, NAs are represented by the value null. If there is no overlap between the two maps, a warning is printed and the output values are set to 0 or null, respectively.
The estimated kappa value in r.kappa is computed for a single class i only, i.e. it measures the observed agreement between the classifications for those observations that classifier 1 has assigned to class i. In other words, the choice of reference matters here.
It is calculated as:
kpp[i] = (pii[i] - pi[i] * pj[i]) / (pi[i] - pi[i] * pj[i]);
where:
- pii[i] is the probability of agreement on class i (the number of pixels assigned to class i by both classifiers, divided by the total number of assessed pixels),
- pi[i] is the probability of classification as i by classifier 1 (the number of pixels assigned to class i by classifier 1, divided by the total number of assessed pixels),
- pj[i] is the probability of classification as i by classifier 2 (the number of pixels assigned to class i by classifier 2, divided by the total number of assessed pixels).
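As a worked illustration of this formula (not part of the module itself), the per-class estimate can be computed from a small, made-up error matrix:

# Toy error matrix: rows are classes assigned by classifier 1 (classification),
# columns are classes assigned by classifier 2 (reference). Counts are invented.
matrix = [
    [35, 4, 1],
    [6, 40, 4],
    [2, 3, 25],
]
n = sum(sum(row) for row in matrix)

def class_kappa(i):
    """Estimated kappa for class i, following the formula above."""
    pii = matrix[i][i] / n                  # agreement on class i
    pi = sum(matrix[i]) / n                 # proportion assigned to i by classifier 1
    pj = sum(row[i] for row in matrix) / n  # proportion assigned to i by classifier 2
    return (pii - pi * pj) / (pi - pi * pj)

for i in range(len(matrix)):
    print(f"class {i}: estimated kappa = {class_kappa(i):.3f}")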
Some of the reported values (overall accuracy, Cohen's kappa, MCC) can be misleading if the cell counts are not balanced among classes. See e.g. Powers, D.M.W., 2012, "The Problem with Kappa"; Zhu, Q., 2020, "On the performance of Matthews correlation coefficient (MCC) for imbalanced dataset".
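A deliberately extreme, made-up two-class case shows the effect: if 95 of 100 pixels belong to one class and the classifier assigns every pixel to that class, overall accuracy is 0.95 while Cohen's kappa is 0.

# Degenerate two-class case: 95 of 100 pixels belong to class A and the
# classifier labels every pixel as A. Counts are illustrative only.
matrix = [[95, 5],   # labelled A: 95 actually A, 5 actually B
          [0, 0]]    # labelled B: none
n = 100

p_o = (matrix[0][0] + matrix[1][1]) / n                        # observed agreement
row_sums = [sum(r) for r in matrix]
col_sums = [sum(r[j] for r in matrix) for j in range(2)]
p_e = sum(row_sums[k] * col_sums[k] for k in range(2)) / n**2  # chance agreement

print(f"overall accuracy = {p_o:.2f}")                         # 0.95
print(f"Cohen's kappa    = {(p_o - p_e) / (1 - p_e):.2f}")     # 0.00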
g.region raster=landclass96 -p
r.kappa -w classification=landuse96_28m reference=landclass96

# export Kappa matrix as CSV file "kappa.csv"
r.kappa classification=landuse96_28m reference=landclass96 output=kappa.csv -m -h
Verification of classified LANDSAT scene against training areas:
r.kappa -w classification=lsat7_2002_classes reference=training
Available at: r.kappa source code (history)
Latest change: Saturday May 25 03:15:08 2024 in commit: 55b2a2bcccd3fd295e4f20c3e49f6984d61982d3