A recent survey of simulation studies concluded that an overwhelming majority of papers do not report a rationale for the number of iterations carried out in Monte Carlo robustness (MCR) experiments. The survey suggested that researchers might benefit from adopting a hypothesis testing strategy in the planning and reporting of simulation studies. This paper presents a table of the number of iterations necessary to detect departures from a series of nominal Type I error rates, based on hypothesis testing logic. The table is indexed by effect size, significance level, and power level for the two-tailed test that a proportion equals some constant. An alternative approach based on the construction of a confidence interval is discussed and dismissed. The MCR research design demands an adequate definition of robustness and a sample size sufficient to detect departures from that definition. (Author/TJH)
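The iteration counts such a table contains follow from a standard power analysis for a one-sample test of a proportion. A minimal sketch of that calculation, using the usual normal-approximation sample-size formula (the specific values below, a nominal rate of .05 and a true rejection rate of .075, are illustrative assumptions, not entries from the paper's table):

```python
from math import ceil, sqrt
from statistics import NormalDist

def mcr_iterations(p0, p1, alpha=0.05, power=0.90):
    """Iterations needed for a two-tailed test of H0: p = p0 to detect
    a true rejection rate p1 with the given significance and power,
    via the normal-approximation formula for one proportion."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed critical value
    z_b = NormalDist().inv_cdf(power)          # quantile for desired power
    num = z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))
    return ceil((num / (p1 - p0)) ** 2)

# How many Monte Carlo iterations to detect that a nominal .05 test
# actually rejects 7.5% of the time, at alpha = .05 with 90% power?
print(mcr_iterations(0.05, 0.075))
```

Smaller departures from the nominal rate, or higher demanded power, drive the required number of iterations up sharply, which is the paper's point: the iteration count should be justified the same way any sample size is.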
[1] T. Cook, et al. Quasi-experimentation: Design & Analysis Issues for Field Settings, 1979.
[2] Walter W. Hauck, et al. A Survey Regarding the Reporting of Simulation Studies, 1984.
[3] G. Glass, et al. Consequences of Failure to Meet Assumptions Underlying the Fixed Effects Analyses of Variance and Covariance, 1972.
[4] Scott E. Maxwell, et al. Robustness of the Quasi F Statistic to Violations of Sphericity, 1986.
[5] Ronald C. Serlin, et al. Comparison of ANOVA Alternatives Under Variance Heterogeneity and Specific Noncentrality Structures, 1986.
[6] P. Lachenbruch. Statistical Power Analysis for the Behavioral Sciences (2nd ed.), 1989.