Randomized Controlled Studies and Alternative Designs in Outcome Studies

This article reviews several decades of the author’s meta-analytic and experimental research on the conditions under which nonrandomized experiments can approximate the results of randomized experiments (REs). Several studies make clear that we can expect accurate effect estimates from the regression discontinuity design, though its statistical power is lower, it estimates a different parameter than the RE, and its analysis is considerably more complex. For other nonrandomized designs, the picture is more mixed. They may yield accurate estimates if they are prospectively designed to include comprehensive and reliable measurement of the process by which participants select into conditions, if they use large samples, and if they choose control groups drawn from the same location and sharing the same substantive characteristics. By contrast, there is little good reason to expect accurate effect estimates from nonrandomized experiments that rely on archival data without comprehensive selection measures.
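To make the regression discontinuity logic concrete, the following is a minimal sketch (not taken from the article; the data are simulated and all names are illustrative). Treatment is assigned strictly by a cutoff on an assignment variable, a separate line is fit on each side of the cutoff, and the treatment effect at the cutoff is estimated as the jump between the two fitted lines:

```python
import random
import statistics

def fit_line(xs, ys):
    # ordinary least squares slope and intercept for a single predictor
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

random.seed(0)
cutoff = 0.0
true_effect = 2.0

# assignment variable; treatment is determined entirely by the cutoff
x = [random.gauss(0, 1) for _ in range(2000)]
y = [1.5 * xi + (true_effect if xi >= cutoff else 0.0) + random.gauss(0, 1)
     for xi in x]

left = [(xi, yi) for xi, yi in zip(x, y) if xi < cutoff]
right = [(xi, yi) for xi, yi in zip(x, y) if xi >= cutoff]

# predict the outcome at the cutoff from each side; the discontinuity
# (jump) between the two predictions estimates the effect at the cutoff
sl, il = fit_line([p[0] for p in left], [p[1] for p in left])
sr, ir = fit_line([p[0] for p in right], [p[1] for p in right])
estimate = (sr * cutoff + ir) - (sl * cutoff + il)
print(f"RD estimate at cutoff: {estimate:.2f}")
```

Note that the estimate applies only at the cutoff (the "different parameter" mentioned above), and that only units near the cutoff carry most of the information, which is one source of the design's lower statistical power.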
