Karl Pearson Chi-Square Test: The Dawn of Statistical Inference

Specification, or stochastic modeling, of data is an important step in statistical analysis. Karl Pearson was the first to recognize this problem and, in a paper published in 1900, introduced a criterion for examining whether observed data support a given specification. He called it the chi-square goodness-of-fit test; it motivated research in hypothesis testing and the estimation of unknown parameters, and led to the development of statistics as a separate discipline. Efron (1995) says, “Karl Pearson’s famous chi-square paper appeared in the spring of 1900, an auspicious beginning to a wonderful century for the field of statistics.”
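To make the criterion concrete, here is a minimal sketch (not taken from the paper) of Pearson's goodness-of-fit statistic, X^2 = sum_i (O_i - E_i)^2 / E_i, referred to a chi-square distribution on k - 1 degrees of freedom when the specification fixes the cell probabilities. The die-roll counts and the use of SciPy are illustrative assumptions.

```python
# Minimal sketch of Pearson's chi-square goodness-of-fit test.
# The observed counts below are hypothetical die rolls, used only for illustration.
from scipy.stats import chi2

observed = [18, 22, 16, 25, 19, 20]              # hypothetical counts for each die face
n = sum(observed)
expected = [n / len(observed)] * len(observed)   # specification under test: a fair die

# Pearson's statistic: sum of (observed - expected)^2 / expected over all cells
x2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
dof = len(observed) - 1                          # k - 1 degrees of freedom (no fitted parameters)
p_value = chi2.sf(x2, dof)                       # upper-tail probability of the chi-square law

print(f"X^2 = {x2:.3f}, df = {dof}, p = {p_value:.3f}")
```

A small p-value would be taken as evidence that the observed data do not support the hypothesized specification; when parameters are estimated from the data, the degrees of freedom are reduced accordingly.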

[1] C. R. Rao, Large sample tests of statistical hypotheses concerning several parameters with applications to problems of estimation, Mathematical Proceedings of the Cambridge Philosophical Society, 1948.

[2] I. Vajda, et al., Asymptotic divergence of estimates of discrete distributions, 1995.

[3] P. Greenwood, et al., A Guide to Chi-Squared Testing, 1996.

[4] J. P. Romano, A Bootstrap Revival of Some Nonparametric Distance Tests, 1988.

[5] H. O. Lancaster, The Chi-Squared Distribution, 1971.

[6] K. Pearson, On the Criterion that a Given System of Deviations from the Probable in the Case of a Correlated System of Variables is Such that it Can be Reasonably Supposed to have Arisen from Random Sampling, 1900.

[7] A. Wald, Tests of statistical hypotheses concerning several parameters when the number of observations is large, 1943.

[8] G. Cowan, Statistical Data Analysis, 1998.

[9] C. R. Rao, et al., On the convexity of some divergence measures based on entropy functions, IEEE Transactions on Information Theory, 1982.

[10] C. R. Rao, et al., Convexity properties of entropy functions and analysis of diversity, 1984.

[11] W. J. Hall, et al., On large-sample estimation and testing in parametric models, 1990.

[12] M. Pardo, On Burbea-Rao Divergence Based Goodness-of-Fit Tests for Multinomial Models, 1999.

[13] R. Beran, Simulated Power Functions, 1986.

[14] H. Ahrens, review of H. O. Lancaster, The Chi-Squared Distribution (Wiley & Sons, New York, 1969), 1971.

[15] C. R. Rao, Diversity and dissimilarity coefficients: a unified approach, 1982.

[16] J. Shao, et al., The Jackknife and Bootstrap, 1996.

[17] J. Durbin, et al., Distribution Theory for Tests Based on the Sample Distribution Function, 1973.

[18] E. S. Pearson, et al., On the Use and Interpretation of Certain Test Criteria for Purposes of Statistical Inference, Part I, 1928.

[19] S. S. Wilks, The Large-Sample Distribution of the Likelihood Ratio for Testing Composite Hypotheses, 1938.

[20] J. Durbin, Distribution theory for tests based on the sample distribution function, 1973.

[21] C. R. Rao, et al., On the convexity of higher order Jensen differences based on entropy functions, IEEE Transactions on Information Theory, 1982.

[22] T. R. C. Read, et al., Goodness-of-Fit Statistics for Discrete Multivariate Data, 1988.

[23] E. Lehmann, Elements of Large-Sample Theory, 1998.