Experiments for Educational Evaluation and Improvement

To help develop and improve programs and practices in U.S. schools and classrooms, current national policies strongly encourage more widespread application of rigorous research methods for evaluating what works. Although randomized experiments have been accepted and applied as the gold standard for testing and developing innovations in other fields, most notably medicine, their application to questions in education has been infrequent. This article articulates the logic of these experiments, discusses reasons for their infrequent use in education, and presents several ways that evaluators may apply experiments to the special circumstances surrounding education. If randomization is to be more widely accepted and implemented in education, the ethical and political dilemmas of withholding services must be addressed, experiments must be adapted to fit the messy and complex world of schools and classrooms, and an even stronger federal role is needed to foster and sustain experimentation and improvement of educational practices.
