Statistical power, the probability of correctly rejecting a false null hypothesis, plays a critical role in empirical research. This report suggests why power is of special importance to English Education researchers, specifies procedures for calculating power, and assesses the power of empirical research published in Research in the Teaching of English during a three-year period. Results of the analysis indicate that English Education research is of low power overall but, relative to other fields, comparatively high. Three recommendations for the adequate reporting of research results are presented.

In recent years there has been a substantial increase in empirically based research about writing. It is probably fair to say, in fact, that today one of the more popular ways of studying writing is grounded in statistical analyses of data samples. When appropriately used, statistical inference can provide strong and compelling support for claims about writing and writing-related activities. Yet when statistics are inappropriately applied, or when studies are designed without sufficient concern for the adequacy of procedures, conclusions can be inherently flawed. Thus, it is important for researchers, and for those who seek to interpret research, to be conscious of methodological issues that can affect the adequacy of empirical work.

This report examines one methodological consideration present in any empirical research attempt: statistical power, the ability of a statistical procedure to detect the presence of an effect if that effect is truly present. In its simplest form, statistical power is the probability of correctly rejecting the null hypothesis. Computationally, it ranges from 0 to 1.0; perfect power is 1.0. Adequate power permits the investigator to draw accurate conclusions about a hypothesis. Inadequate power precludes any firm, empirically based inference of either effect or no effect.
Statistical power is a major concern in quantitative writing research for three reasons. First, the lack of statistical power increases the chance of Type II, or beta, error: the probability of retaining the null hypothesis when, in fact, it ought to be rejected. Power is defined as 1 − beta; the greater the power, the less likely one is to commit a Type II error. This sort of error creates special problems in exploratory studies, where procedures are often ini-

Research in the Teaching of English, Vol. 17, No. 2, May 1983
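The relation power = 1 − beta described above can be made concrete with a small simulation, a tool unavailable to the original report but useful for modern readers. The sketch below estimates the power of a two-sample t-test by Monte Carlo; the effect size, group size, and alpha level are arbitrary illustrative assumptions, not values drawn from the report.

```python
# Monte Carlo estimate of the power of a two-sample t-test.
# All parameters (effect size d, n per group, alpha) are illustrative
# assumptions chosen for this sketch, not figures from the report.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d, n, alpha, trials = 0.5, 30, 0.05, 5000

rejections = 0
for _ in range(trials):
    control = rng.normal(0.0, 1.0, n)   # group drawn under the null mean
    treated = rng.normal(d, 1.0, n)     # group with a true effect of size d
    _, p = stats.ttest_ind(control, treated)
    if p < alpha:
        rejections += 1

power = rejections / trials   # P(reject H0 | H0 is false)
beta = 1 - power              # Type II error rate: P(retain H0 | H0 is false)
print(f"estimated power = {power:.2f}, beta = {beta:.2f}")
```

Raising the sample size or the true effect size in this sketch raises the estimated power, which is exactly the leverage a prospective power analysis gives a researcher at the design stage.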