We show that if overall sample size and effect size are held constant, the power of the F test for a one-way analysis of variance decreases dramatically as the number of groups increases. This reduction in power is even greater when the groups added to the design do not produce treatment effects. If a second independent variable is added to the design, either a split-plot or a completely randomized design may be employed. For the split-plot design, we show that the power of the F test on the between-groups factor decreases as the correlation across the levels of the within-groups factor increases. The attenuation in between-groups power becomes more pronounced as the number of levels of the within-groups factor increases. Sample size and total cost calculations are required to determine whether the split-plot or completely randomized design is more efficient in a particular application. The outcome hinges on the cost of obtaining (or recruiting) a single subject relative to the cost of obtaining a single observation: we call this the subject-to-observation cost (SOC) ratio. Split-plot designs are less costly than completely randomized designs only when the SOC ratio is high, the correlation across the levels of the within-groups factor is low, and the number of such levels is small.
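The first claim can be checked numerically with the standard noncentral-F formulation of one-way ANOVA power. The sketch below is illustrative only and is not the authors' own computation; it assumes SciPy, expresses effect size as Cohen's f, and uses the usual noncentrality parameter λ = N·f², where N is the total sample size.

```python
from scipy.stats import f as f_dist, ncf

def anova_power(k, n_total, cohen_f, alpha=0.05):
    """Power of the one-way ANOVA F test for k groups, total sample
    size n_total, and effect size Cohen's f (all groups equal n)."""
    df1, df2 = k - 1, n_total - k
    lam = n_total * cohen_f ** 2          # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(f_crit, df1, df2, lam)

# Hold N = 60 and f = 0.25 fixed; power falls as groups are added.
for k in (2, 3, 4, 6):
    print(k, round(anova_power(k, 60, 0.25), 3))
```

Because N and f are held constant, λ stays fixed while the numerator degrees of freedom grow and per-group n shrinks, so the computed power declines monotonically with k, in line with the abstract.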