The high cost of complexity in experimental design and data analysis: Type I and Type II error rates in multiway ANOVA

The availability of statistical software packages has led to a sharp increase in the use of complex research designs and complex statistical analyses in communication research. An informal examination of studies from two leading communication journals suggests that the analysis of variance (ANOVA) is often the statistic of choice, and that a substantial proportion of published research reports using ANOVA employ complex (k ≥ 3) factorial designs, often involving multiple dependent variables. This article reports a series of Monte Carlo simulations demonstrating that this complexity may come at a heavier cost than many communication researchers realize. As frequently used, complex factorial ANOVAs yield Type I and Type II error rates that many communication scholars would likely consider unacceptable. Consequently, the quality of statistical inference in many such studies is highly suspect. Communication researchers are warned about problems associated with design and statistical complexity, and solutions are suggested.
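The Type I error inflation at issue can be illustrated with a small simulation. A three-way (k = 3) factorial ANOVA produces seven F tests (three main effects, three two-way interactions, one three-way interaction), each typically evaluated at α = .05. The sketch below is not the article's own simulation; it assumes, for simplicity, that the seven tests are approximately independent under the null (roughly true for balanced designs), so each null p-value is uniform on [0, 1]. It then estimates the familywise probability of at least one false rejection, which analytically is 1 − (1 − .05)⁷ ≈ .30.

```python
import random

def familywise_error_rate(num_tests, alpha, reps, seed=0):
    """Monte Carlo estimate of the probability that at least one of
    `num_tests` independent null-true tests rejects at level `alpha`.
    Under the null, each p-value is Uniform(0, 1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        # A rejection occurs whenever any simulated p-value falls below alpha.
        if any(rng.random() < alpha for _ in range(num_tests)):
            hits += 1
    return hits / reps

# Seven F tests from a 3-way factorial design, each at alpha = .05.
rate = familywise_error_rate(num_tests=7, alpha=0.05, reps=20000)
print(round(rate, 3))  # close to 1 - 0.95**7 ≈ 0.302, not 0.05
```

The point of the sketch is that the per-test α of .05 understates the chance of a spurious finding somewhere in the design by a factor of about six; real factorial ANOVAs with correlated tests or multiple dependent variables can depart from the independence assumption, but the qualitative inflation remains.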
