## Testing whether two distributions are different

Use the chi-square test to test whether two distributions are different.

The chi-square statistic is

$$\chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i}$$

where:

$O_i$ = observed data in bin *i*

$E_i$ = expected data in bin *i*

The above can be used directly when comparing a set of observations with a known (expected) distribution. In this case the number of degrees of freedom is equal to the number of bins.
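As a minimal sketch of the one-sample case, the statistic can be computed directly from the formula above (the function name `chi_square_stat` is illustrative, not from any particular library):

```python
def chi_square_stat(observed, expected):
    """Chi-square statistic: sum over bins of (O_i - E_i)^2 / E_i."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Example: 60 die rolls compared against a fair die (10 expected per face).
observed = [8, 12, 9, 11, 6, 14]
expected = [10] * 6
print(chi_square_stat(observed, expected))
```

Here the expected distribution is known a priori, so the number of degrees of freedom is the number of bins (6).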

Given two sets of binned data *A* and *B*, with counts $A_i$ and $B_i$ in bin *i*, the expected value in each bin of each set is its proportion of the total, i.e.:

$$E_{A,i} = \frac{N_A (A_i + B_i)}{N_A + N_B}, \qquad E_{B,i} = \frac{N_B (A_i + B_i)}{N_A + N_B}$$

where $N_A = \sum_i A_i$ is the total number of samples in set *A*, etc.
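A short sketch of the expected-count computation (the helper name `expected_counts` is hypothetical):

```python
def expected_counts(a, b):
    """Expected count in each bin of set A: N_A * (A_i + B_i) / (N_A + N_B)."""
    n_a, n_b = sum(a), sum(b)
    return [n_a * (ai + bi) / (n_a + n_b) for ai, bi in zip(a, b)]

a = [10, 20, 30]   # N_A = 60
b = [15, 15, 30]   # N_B = 60
print(expected_counts(a, b))  # -> [12.5, 17.5, 30.0]
```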

Thus the chi-square statistic is

$$\chi^2 = \sum_i \frac{(A_i - E_{A,i})^2}{E_{A,i}} + \sum_i \frac{(B_i - E_{B,i})^2}{E_{B,i}}$$

which can also be written:

$$\chi^2 = \sum_i \frac{\left(\sqrt{N_B/N_A}\,A_i - \sqrt{N_A/N_B}\,B_i\right)^2}{A_i + B_i}$$

If the total number of samples in each set is the same, i.e. $N_A = N_B$, then this simplifies down to:

$$\chi^2 = \sum_i \frac{(A_i - B_i)^2}{A_i + B_i}$$

The number of degrees of freedom is (number of bins – 1).
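The two-sample statistic above can be sketched as follows (the function name is illustrative; bins where both counts are zero contribute nothing and are skipped):

```python
import math

def chi_square_two_sample(a, b):
    """Two-sample chi-square statistic:
    sum over bins of (sqrt(N_B/N_A)*A_i - sqrt(N_A/N_B)*B_i)^2 / (A_i + B_i).
    When N_A == N_B this reduces to sum of (A_i - B_i)^2 / (A_i + B_i)."""
    n_a, n_b = sum(a), sum(b)
    ra, rb = math.sqrt(n_b / n_a), math.sqrt(n_a / n_b)
    return sum((ra * ai - rb * bi) ** 2 / (ai + bi)
               for ai, bi in zip(a, b) if ai + bi > 0)

a = [10, 20, 30]
b = [15, 15, 30]
# Equal totals, so this equals (10-15)^2/25 + (20-15)^2/35 + 0
print(chi_square_two_sample(a, b))
```

With 3 bins, this statistic would be compared against a chi-square distribution with 2 degrees of freedom.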

**Testing against a significance level**

Choose a confidence level and look up the inverse chi-square cumulative distribution for the given number of degrees of freedom; e.g. at 95% confidence and 1 degree of freedom, the threshold is 3.84. If $\chi^2$ exceeds this threshold, then it can be said with the given level of confidence that the distributions differ.
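A minimal sketch of the thresholding step, using 95% critical values taken from a standard chi-square table rather than a library inverse CDF (in practice one of the functions listed under Code Pointers would compute these; the helper name `distributions_differ` is hypothetical):

```python
# 95%-confidence critical values of the chi-square distribution,
# indexed by degrees of freedom (from a standard statistical table).
CHI2_CRIT_95 = {1: 3.841, 2: 5.991, 3: 7.815, 4: 9.488, 5: 11.070}

def distributions_differ(chi2, dof, crit=CHI2_CRIT_95):
    """True if chi2 exceeds the 95% critical value for the given dof."""
    return chi2 > crit[dof]

print(distributions_differ(4.2, 1))   # prints True:  4.2 > 3.841
print(distributions_differ(1.71, 2))  # prints False: 1.71 < 5.991
```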

Since the chi-square distribution is strictly the distribution of the sum of the squares of *normal* random variables, this test should only be used when there are enough samples per bin for the counts to be approximately normal. It will normally be acceptable so long as no more than 10% of the bins have expected frequencies below 5. Where there is only 1 degree of freedom, the approximation is not reliable if expected frequencies are below 10.

**Code Pointers**

Octave – chisquare_inv, chisquare_test_homogeneity

Perl – Statistics::Distributions

Spreadsheet – chiinv

**References**