The consequence of ignoring a nested factor on measures of effect size in analysis of variance.

Although the consequences of ignoring a nested factor for decisions to reject the null hypothesis of no treatment effects have been discussed in the literature, researchers in applied psychology and education typically ignore treatment providers (often a nested factor) when comparing the efficacy of treatments. This incorrect analysis not only invalidates tests of hypotheses but also overestimates the treatment effect. Formulas were derived and a Monte Carlo study was conducted to estimate the degree to which the F statistic and measures of treatment effect size are inflated when the effects due to providers of treatments are ignored. These untoward effects are illustrated with examples from psychotherapeutic treatments.
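The mechanism can be demonstrated with a small simulation. The sketch below is written for this summary and is not the study's code; the design values (2 treatments, 4 providers nested within each treatment, 10 clients per provider, intraclass correlation of .20) are illustrative assumptions. It generates data under the null hypothesis of no treatment effect, then compares the correct nested ANOVA (providers as the error term for treatments) with the incorrect one-way analysis that pools provider variance into the error term.

```python
# Minimal Monte Carlo sketch under ASSUMED design values; not the authors' code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

p, m, n = 2, 4, 10      # treatments; providers nested in each treatment; clients per provider
rho = 0.2               # intraclass correlation: share of variance due to providers
n_sims = 5000
N = p * m * n
crit_nested = stats.f.ppf(0.95, p - 1, p * (m - 1))   # correct test df
crit_oneway = stats.f.ppf(0.95, p - 1, N - p)         # incorrect test df

rej_nested = rej_oneway = 0
omega2_nested, omega2_oneway = [], []

for _ in range(n_sims):
    # Data under the NULL of no treatment effect: outcome = provider random
    # effect (variance rho) + client-level error (variance 1 - rho).
    y = (rng.normal(0, np.sqrt(rho), (p, m, 1))
         + rng.normal(0, np.sqrt(1 - rho), (p, m, n)))

    grand = y.mean()
    trt = y.mean(axis=(1, 2))                           # treatment means
    prov = y.mean(axis=2)                               # provider means

    ss_trt = m * n * np.sum((trt - grand) ** 2)
    ss_prov = n * np.sum((prov - trt[:, None]) ** 2)    # providers within treatments
    ss_err = np.sum((y - prov[:, :, None]) ** 2)        # clients within providers
    ss_total = ss_trt + ss_prov + ss_err

    # Correct nested analysis: MS for providers is the error term for treatments.
    ms_trt = ss_trt / (p - 1)
    ms_prov = ss_prov / (p * (m - 1))
    rej_nested += ms_trt / ms_prov > crit_nested
    omega2_nested.append((ss_trt - (p - 1) * ms_prov) / (ss_total + ms_prov))

    # Incorrect analysis: providers ignored, their variance pooled into error.
    ms_within = (ss_prov + ss_err) / (N - p)
    rej_oneway += ms_trt / ms_within > crit_oneway
    omega2_oneway.append((ss_trt - (p - 1) * ms_within) / (ss_total + ms_within))

print(f"Type I error, nested ANOVA:      {rej_nested / n_sims:.3f}")      # ~ .05
print(f"Type I error, providers ignored: {rej_oneway / n_sims:.3f}")      # inflated
print(f"Mean omega^2, nested:            {np.mean(omega2_nested):+.3f}")  # ~ 0
print(f"Mean omega^2, providers ignored: {np.mean(omega2_oneway):+.3f}")  # > 0
```

Because the expected mean square for treatments contains the provider variance component, the one-way F test rejects the true null far more often than the nominal .05, and the one-way omega-squared is positive on average even though the true treatment effect is zero, consistent with the inflation described above. The omega-squared formula for the nested case here simply substitutes the provider mean square as the error term and is an illustrative analog, not the paper's derived estimator.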
