Statistical Power and Effect Sizes of Clinical Neuropsychology Research

Cohen, in a now-classic paper on statistical power, reviewed the articles in the 1960 volume of one psychology journal and determined that the majority of studies had less than a 50:50 chance of detecting an effect that truly exists in the population, and thus of obtaining statistically significant results. Such low statistical power, Cohen concluded, was largely due to inadequate sample sizes. Subsequent reviews of research published in other experimental psychology journals found similar results. We provide a statistical power analysis of clinical neuropsychological research by reviewing a representative sample of 66 articles from the Journal of Clinical and Experimental Neuropsychology, the Journal of the International Neuropsychological Society, and Neuropsychology. The results show inadequate power, similar to that found for experimental research, when Cohen's criteria for effect size are used. However, the results are also encouraging in showing that the field of clinical neuropsychology deals with larger effect sizes than are usually observed in experimental psychology, and that the reviewed clinical neuropsychology research does have adequate power to detect these larger effect sizes. This review also reveals a prevailing failure to heed Cohen's recommendations that researchers should routinely report a priori power analyses, effect sizes, and confidence intervals, and should conduct fewer statistical tests.
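To make the quantities discussed above concrete, the sketch below computes approximate power for a two-sided, two-sample t-test at Cohen's conventional effect-size benchmarks (small d = .2, medium d = .5, large d = .8), using a normal approximation to the noncentral t distribution. This is an illustrative sketch, not the procedure used in the reviewed paper; the function name and the choice of approximation are the editor's.

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample_t(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample t-test for
    standardized effect size d, using a normal approximation."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)          # two-sided critical value
    ncp = d * sqrt(n_per_group / 2)            # noncentrality under H1
    # Probability of falling in either rejection region under H1:
    return z.cdf(ncp - z_crit) + z.cdf(-ncp - z_crit)

# Cohen's benchmarks with 30 participants per group:
for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: power \u2248 {power_two_sample_t(d, 30):.2f}")
```

With n = 30 per group, power is well below the conventional .80 target for small and medium effects, which illustrates the abstract's point: studies powered only for large effects will routinely miss smaller ones. The approximation also reproduces Cohen's familiar rule of thumb that roughly 64 participants per group yield power near .80 for a medium effect at alpha = .05.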

[1] S. H. Evans, et al. Misuse of analysis of covariance when treatment effect and covariate are confounded. Psychological Bulletin, 1968.

[2] P. Meehl. Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. 1978.

[3] R. Rosenthal, et al. Statistical procedures and the justification of knowledge in psychological science. 1989.

[4] R. Goldstein. Power and Sample Size via MS/PC-DOS Computers. 1989.

[5] E. Erdfelder, et al. GPOWER: A general power analysis program. 1996.

[6] R. Rosenthal. How are we doing in soft psychology? 1990.

[7] J. T. E. Richardson. Measures of effect size. 1996.

[8] J. Cohen, et al. The statistical power of abnormal-social psychological research: A review. Journal of Abnormal and Social Psychology, 1962.

[9] P. Lachenbruch. Statistical Power Analysis for the Behavioral Sciences (2nd ed.) [book review]. 1989.

[10] D. Barlow. On the relation of clinical research to clinical practice: Current issues, new directions. Journal of Consulting and Clinical Psychology, 1981.

[11] J. Cohen, et al. A power primer. Psychological Bulletin, 1992.

[12] K. J. Rothman. No adjustments are needed for multiple comparisons. Epidemiology, 1990.

[13] J. Cohen. Statistical Power Analysis for the Behavioral Sciences. 1969.

[14] D. Cicchetti. Role of null hypothesis significance testing (NHST) in the design of neuropsychologic research. Journal of Clinical and Experimental Neuropsychology, 1998.

[15] D. Cicchetti, et al. Null hypothesis disrespect in neuropsychology: Dangers of alpha and beta errors. Journal of Clinical and Experimental Neuropsychology, 1988.

[16] J. Cohen. The earth is round (p < .05). 1994.

[17] R. Rosenthal. Meta-analytic procedures for social research. 1984.

[18] A. G. Sawyer, et al. Statistical power and effect size in marketing research. 1981.

[19] J. Cohen. Things I have learned (so far). 1990.