- **-a** Do not align input region with input maps
- **-c** Use circular mask
- **--overwrite** Allow output files to overwrite existing files
- **--help** Print usage summary
- **--verbose** Verbose module output
- **--quiet** Quiet module output
- **--ui** Force launching GUI dialog

- **input**=*name[,name,...]* **[required]** Name of input raster map(s)
- **output**=*name[,name,...]* **[required]** Name for output raster map(s)
- **method**=*string[,string,...]* Change assessment
  - Options: *pc, gain1, gain2, gain3, ratio1, ratio2, ratio3, gini1, gini2, gini3, dist1, dist2, dist3, chisq1, chisq2, chisq3*
  - Default: *ratio3*
  - **pc**: proportion of changes
  - **gain1**: Information gain for category distributions
  - **gain2**: Information gain for size distributions
  - **gain3**: Information gain for category and size distributions
  - **ratio1**: Information gain ratio for category distributions
  - **ratio2**: Information gain ratio for size distributions
  - **ratio3**: Information gain ratio for category and size distributions
  - **gini1**: Gini impurity for category distributions
  - **gini2**: Gini impurity for size distributions
  - **gini3**: Gini impurity for category and size distributions
  - **dist1**: Statistical distance for category distributions
  - **dist2**: Statistical distance for size distributions
  - **dist3**: Statistical distance for category and size distributions
  - **chisq1**: CHI-square for category distributions
  - **chisq2**: CHI-square for size distributions
  - **chisq3**: CHI-square for category and size distributions
- **size**=*integer* Window size (cells)
  - Default: *40*
- **step**=*integer* Processing step (cells)
  - Default: *40*
- **alpha**=*float* Alpha for general entropy; default = 1 for Shannon entropy
  - Default: *1*

*r.change.info* moves a processing window over the input maps and calculates a change assessment for each window position, writing the result to the corresponding cells of the output map(s).

The measures *information gain*, *information gain
ratio*, *CHI-square* and *Gini-impurity* are commonly
used in decision tree modelling (Quinlan 1986) to compare
distributions. These measures as well as the statistical distance are
based on landscape structure and are calculated for the distributions
of patch categories and/or patch sizes. A patch is a contiguous block
of cells with the same category (class), for example a forest fragment.
The proportion of changes is based on cell changes in the current
landscape.

**1. Distributions over categories (e.g. land cover class)** - This provides information about changes in categories, e.g. if one category becomes more prominent. This detects changes in category composition.
**2. Distributions over size classes** - This provides information about fragmentation, e.g. if a few large fragments are broken up into many small fragments. This detects changes in fragmentation.
**3. Distributions over categories and size classes** - This provides information about whether particular combinations of category and size class changed between input maps. This detects changes in the general landscape structure.

The numbers indicate which distribution will be used for the selected method (see below).

A low change in category distributions and a high change in size distributions means that the frequency of categories did not change much whereas the size of patches did change.
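The three distribution types can be illustrated with a small sketch (hypothetical Python, not part of the module): it extracts 4-connected patches from a toy category grid and tallies the category, size, and combined distributions. Note that the module bins patch sizes into size classes rather than using raw sizes as done here.

```python
from collections import deque, Counter

def patches(grid):
    """Find 4-connected patches; return a list of (category, size) pairs."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    out = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c]:
                continue
            cat, size, queue = grid[r][c], 0, deque([(r, c)])
            seen[r][c] = True
            while queue:  # breadth-first flood fill over same-category cells
                y, x = queue.popleft()
                size += 1
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx] and grid[ny][nx] == cat):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            out.append((cat, size))
    return out

grid = [
    [1, 1, 2],
    [1, 2, 2],
    [3, 3, 2],
]
p = patches(grid)
cat_dist = Counter(cat for cat, _ in p)     # 1. distribution over categories
size_dist = Counter(size for _, size in p)  # 2. distribution over sizes
both_dist = Counter(p)                      # 3. distribution over (category, size)
print(cat_dist, size_dist, both_dist)
```

Breaking the large category-2 patch into several small ones would leave `cat_dist` almost unchanged but shift `size_dist` toward small sizes, which is exactly the fragmentation signal described above.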

Information gain indicates the absolute amount of information gained (to be precise, reduced uncertainty) when considering the individual input maps instead of their combination. When cells and patches are distributed over a large number of categories and a large number of size classes, information gain tends to over-estimate changes.

The information gain can be zero even if all cells changed, but the distributions (frequencies of occurrence) remained identical. The square root of the information gain is sometimes used as a distance measure and it is closely related to Fisher's information metric.
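The idea can be sketched as follows (illustrative Python, not the module's implementation): the gain is taken here as the entropy of the pooled distribution of all input maps minus the weighted mean entropy of the individual maps, so identical distributions yield zero gain even when every cell changed.

```python
from math import log2

def entropy(counts):
    """Shannon entropy (base 2) of a list of frequencies."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c)

def information_gain(dists):
    """Pooled entropy minus the weighted mean entropy of each map
    (a sketch of the decision-tree notion of information gain)."""
    keys = set().union(*dists)
    pooled = [sum(d.get(k, 0) for d in dists) for k in keys]
    total = sum(pooled)
    h_mean = sum(sum(d.values()) / total * entropy(list(d.values()))
                 for d in dists)
    return entropy(pooled) - h_mean

a = {"forest": 50, "water": 50}
b = {"forest": 50, "water": 50}
c = {"forest": 90, "water": 10}
print(information_gain([a, b]))  # identical distributions -> 0.0
print(information_gain([a, c]))  # shifted distribution -> positive gain
```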

The gain ratio is always in the range (0, 1). A larger value means larger differences between input maps.

The Gini impurity is always in the range (0, 1) and calculated with

G = 1 - ∑ p_{i}^{2}
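The formula translates directly to code (illustrative sketch):

```python
def gini(counts):
    """Gini impurity G = 1 - sum(p_i^2) of a list of frequencies."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

print(gini([50, 50]))   # 0.5 -- two equally frequent classes
print(gini([100, 0]))   # 0.0 -- a single class, no impurity
```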

The methods *information gain* and *CHI-square* are the
most sensitive measures, but also the most susceptible to noise. The
*information gain ratio* is less sensitive, but more robust
against noise. The *Gini impurity* is the least sensitive and
detects only drastic changes.

Methods using the category or size class distributions (*gain1*,
*gain2*, *ratio1*, *ratio2*, *gini1*,
*gini2*, *dist1*, *dist2*) are less sensitive than
methods using the combined category and size class distributions
(*gain3*, *ratio3*, *gini3*, *dist3*).

For a thorough change assessment it is recommended to calculate several of these measures (at least information gain and information gain ratio) and to investigate the differences between the resulting assessments.

The Shannon entropy (Shannon 1948) of a distribution is calculated with

H = -∑ p_{i} log_{2}(p_{i})

The entropies are here calculated with base 2 logarithms. The upper
bound of information gain is thus log_{2}(number of classes).
Classes can be categories, size classes, or unique combinations of
categories and size classes.
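To illustrate the upper bound (hypothetical sketch): a uniform distribution over n classes attains exactly log_{2}(n), and a single class carries no uncertainty.

```python
from math import log2

def entropy2(freqs):
    """Shannon entropy with base-2 logarithms."""
    total = sum(freqs)
    return -sum(f / total * log2(f / total) for f in freqs if f)

# uniform distribution over 8 classes attains the bound log2(8) = 3
print(entropy2([5] * 8))        # 3.0
print(entropy2([40, 0, 0, 0]))  # 0.0
```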

The generalized entropy (Rényi 1961) is calculated with

H_{α} = 1 / (1 - α) log_{2}(∑ p_{i}^{α})

An **alpha** < 1 gives higher weight to small frequencies,
whereas an **alpha** > 1 gives higher weight to large
frequencies. This is useful for noisy input data such as the MODIS land
cover/land use products MCD12*. These data often differ only in
single-cell patches. These differences can be due to the applied
classification procedure. Moreover, the probabilities that a cell has
been assigned to class A or class B are often very similar, i.e.
different classes are confused by the applied classification procedure.
In such cases an **alpha** > 1, e.g. 2, will give less weight to
small changes and more weight to large changes, to a degree alleviating
the problem of class confusion.
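The effect of **alpha** can be demonstrated with a small sketch (illustrative Python; the Rényi entropy as defined above, with the Shannon entropy as the alpha = 1 limit):

```python
from math import log2

def renyi_entropy(freqs, alpha):
    """Rényi entropy (base 2); converges to Shannon entropy as alpha -> 1."""
    total = sum(freqs)
    if alpha == 1.0:  # Shannon limit
        return -sum(f / total * log2(f / total) for f in freqs if f)
    return log2(sum((f / total) ** alpha for f in freqs if f)) / (1.0 - alpha)

# one large patch plus four single-cell "noise" patches:
# alpha = 2 down-weights the rare classes, alpha = 0.5 emphasizes them
freqs = [96, 1, 1, 1, 1]
for alpha in (0.5, 1.0, 2.0):
    print(alpha, renyi_entropy(freqs, alpha))
```

With **alpha** = 2 the four rare classes contribute much less to the entropy than with the Shannon entropy, so single-cell differences between otherwise identical maps produce a smaller change signal.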

```
MCD12Q1.A2001.Land_Cover_Type_1
MCD12Q1.A2002.Land_Cover_Type_1
MCD12Q1.A2003.Land_Cover_Type_1
...
```

```sh
r.change.info in=`g.list type=rast pat=MCD12Q1.A*.Land_Cover_Type_1 sep=,` \
    method=pc,gain1,gain2,ratio1,ratio2,dist1,dist2 \
    out=MCD12Q1.pc,MCD12Q1.gain1,MCD12Q1.gain2,MCD12Q1.ratio1,MCD12Q1.ratio2,MCD12Q1.dist1,MCD12Q1.dist2 \
    radius=20 step=40 alpha=2
```

- Quinlan, J.R. 1986. Induction of decision trees. Machine Learning 1: 81-106. DOI:10.1007/BF00116251
- Rényi, A. 1961. On measures of information and entropy. Proceedings of the fourth Berkeley Symposium on Mathematics, Statistics and Probability 1960: 547-561.
- Shannon, C.E. 1948. A Mathematical Theory of Communication. Bell System Technical Journal 27(3): 379-423. DOI:10.1002/j.1538-7305.1948.tb01338.x




© 2003-2019 GRASS Development Team, GRASS GIS 7.6.2svn Reference Manual