Statistical power in operations management research

Abstract This paper discusses the need for and importance of statistical power analysis in field-based empirical research in Production and Operations Management (POM) and related disciplines. The concept of statistical power analysis is explained in detail, and its relevance to the design and conduct of empirical experiments is discussed. Statistical power is the probability that a statistical test will detect an effect in the sample data when that effect is actually present; high power is therefore required to reduce the probability of failing to detect an effect that exists (a Type II error). The paper also examines the relationships among statistical power, significance level, sample size, and effect size. A probability tree analysis further illustrates the importance of statistical power by showing the relationship between Type II errors and the probability of reaching wrong conclusions in statistical analysis. A power analysis of 28 articles (524 statistical tests) published in the Journal of Operations Management and in Decision Sciences shows that 60% of the empirical studies do not achieve high power levels, which means that several of these tests will have a low degree of repeatability. This and similar issues involving statistical power will become increasingly important as empirical studies in POM examine relatively smaller effects.
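To make the interplay among effect size, sample size, significance level, and power concrete, the sketch below computes an approximate power value for a two-sided, two-sample comparison using a normal approximation and Cohen's d as the effect-size measure. The scenario (d = 0.5, 30 subjects per group, alpha = .05) is a hypothetical illustration chosen here, not data taken from the paper.

```python
# Minimal sketch (assumed scenario, not from the paper): normal-approximation
# power for a two-sided, two-sample test with equal group sizes.
from scipy.stats import norm


def approx_power(effect_size: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided two-sample test via the normal approximation."""
    # Noncentrality parameter for equal groups of size n: d * sqrt(n / 2).
    ncp = effect_size * (n_per_group / 2) ** 0.5
    z_crit = norm.ppf(1 - alpha / 2)
    # Power = P(reject H0 | effect present); the lower-tail term is usually negligible.
    return (1 - norm.cdf(z_crit - ncp)) + norm.cdf(-z_crit - ncp)


# Example: a "medium" effect (d = 0.5) with 30 subjects per group at alpha = .05.
print(round(approx_power(0.5, 30), 2))  # ~0.49
```

With these assumed inputs the approximation returns roughly 0.49, well below the 0.80 power level conventionally recommended, which illustrates the paper's point that typical sample sizes can leave studies underpowered for detecting medium or small effects.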
