Randomized Classroom Trials on Trial

Publisher Summary In his testimony, Professor Plum offers three essential arguments about classroom-based intervention research, and this chapter examines each of them. In the prototypical instructional intervention study, two different methods of instruction are implemented by two different teachers in two different classrooms and the outcomes are then compared. Random assignment of students to classrooms is not a critical characteristic of scientifically credible classroom-based instructional intervention studies. A problematic feature of the prototypical instructional intervention study that must be dealt with is the manner in which the instructional treatments are administered. The unit interdependence/interactivity problem extends to situations in which schools are randomly assigned to receive different instructional methods but all classrooms within a particular school receive the same method. The two major areas of concern regarding educational intervention research, namely how students and classrooms are typically assigned to experimental conditions and how classroom-based treatments are administered, have direct implications for how the data from such studies are analyzed statistically. The major concern is that if interdependence among experimental observations or measures exists because of questionable unit-assignment or treatment-administration practices, commonly applied methods of statistical analysis are more than simply inappropriate; they can yield seriously misleading conclusions. The chapter illustrates ten ideal characteristics of educational intervention research: problem focused, theoretically grounded, data-based, psychometrically sound, representative, randomized, carefully implemented, properly analyzed, replicable, and transportable.
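
The statistical point about interdependence can be made concrete with a small simulation, offered here as a rough sketch under assumed values rather than as material from the chapter itself: when students within the same classroom share a classroom effect, a student-level test that treats them as independent observations rejects a true null hypothesis far more often than its nominal level. All parameter values below (five classrooms per condition, 25 students per classroom, an intraclass correlation of .20) are illustrative assumptions, and Python is used only for convenience.

# Illustrative sketch (not from the chapter): a naive student-level t-test
# applied to classroom-clustered data inflates the Type I error rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_classrooms_per_arm = 5        # classrooms assigned to each instructional method (assumed)
students_per_classroom = 25     # assumed classroom size
icc = 0.20                      # assumed intraclass correlation
n_sims = 2000
alpha = 0.05

false_positives = 0
for _ in range(n_sims):
    arms = []
    for _ in range(2):
        # Shared classroom effects induce dependence among classmates;
        # there is no true treatment effect in either condition.
        class_effects = rng.normal(0.0, np.sqrt(icc), n_classrooms_per_arm)
        student_noise = rng.normal(0.0, np.sqrt(1.0 - icc),
                                   (n_classrooms_per_arm, students_per_classroom))
        arms.append((class_effects[:, None] + student_noise).ravel())
    # Naive student-level t-test that ignores the classroom clustering.
    _, p_value = stats.ttest_ind(arms[0], arms[1])
    false_positives += p_value < alpha

print(f"Empirical Type I error rate (nominal {alpha}): {false_positives / n_sims:.3f}")

Aggregating to classroom means (five per condition) and testing those instead restores the nominal error rate, which is consistent with the chapter's concern that the classroom, not the individual student, is the appropriate unit of analysis when classrooms are the units assigned to treatments.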
