Adjustments for making multiple comparisons in large bodies of data are recommended to avoid rejecting the null hypothesis too readily. Unfortunately, reducing the type I error for null associations increases the type II error for those associations that are not null. The theoretical basis for advocating a routine adjustment for multiple comparisons is the “universal null hypothesis” that “chance” serves as the first-order explanation for observed phenomena. This hypothesis undermines the basic premises of empirical research, which hold that nature follows regular laws that may be studied through observations. A policy of not making adjustments for multiple comparisons is preferable because it will lead to fewer errors of interpretation when the data under evaluation are not random numbers but actual observations on nature. Furthermore, scientists should not be so reluctant to explore leads that may turn out to be wrong that they penalize themselves by missing possibly important findings.
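The trade-off described in the first two sentences can be made concrete with a short simulation. The sketch below is not part of the original paper; the number of comparisons, sample size, effect size, significance level, and the use of a Bonferroni rule are illustrative assumptions only. It repeatedly runs 20 two-sided z-tests, 5 of which have a real underlying effect, and compares unadjusted testing at alpha = 0.05 with a Bonferroni-adjusted threshold: the adjustment sharply lowers the probability of any false-positive finding among the null comparisons (type I error), but it also misses more of the genuinely non-null effects (type II error).

# Illustrative sketch only: type I / type II trade-off under Bonferroni adjustment.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
m, m_real = 20, 5                 # 20 comparisons, the first 5 have a true effect
n, alpha, effect = 30, 0.05, 0.6  # assumed per-group size, alpha, effect size
reps = 5000

true_effects = np.array([effect] * m_real + [0.0] * (m - m_real))
fwe_raw = fwe_adj = missed_raw = missed_adj = 0

for _ in range(reps):
    # one z-statistic per comparison (two-sample mean difference, unit variance)
    z = rng.normal(true_effects * np.sqrt(n / 2), 1.0)
    p = 2 * norm.sf(np.abs(z))          # two-sided p-values
    sig_raw = p < alpha                 # no adjustment
    sig_adj = p < alpha / m             # Bonferroni adjustment
    fwe_raw += sig_raw[m_real:].any()   # any false positive among the nulls?
    fwe_adj += sig_adj[m_real:].any()
    missed_raw += (~sig_raw[:m_real]).sum()   # true effects missed
    missed_adj += (~sig_adj[:m_real]).sum()

print(f"P(any false positive): raw {fwe_raw / reps:.2f}, Bonferroni {fwe_adj / reps:.2f}")
print(f"Mean true effects missed (of {m_real}): raw {missed_raw / reps:.2f}, "
      f"Bonferroni {missed_adj / reps:.2f}")

Under these assumed settings the unadjusted analysis flags at least one null comparison in roughly half the replications while missing only about a third of the real effects, whereas the Bonferroni-adjusted analysis almost never produces a false positive but misses the majority of the real effects, which is the exchange of type I for type II error the abstract refers to.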