Learning about Treatment Effects from Experiments with Random Assignment of Treatments

The importance of social programs to a diverse population creates a legitimate concern that the findings of evaluations be widely credible. The weaker the assumptions imposed, the more widely credible are the findings. The classical argument for random assignment of treatments is viewed by many as enabling evaluation under weak assumptions, and it has generated much interest in the conduct of experiments. But the classical argument does impose assumptions, and there often is good reason to doubt their realism. The methodological research described in this article explores the inferences that may be drawn from experimental data under assumptions weak enough to yield widely credible findings. This literature has two branches. One seeks out notions of treatment effect that are identified when the experimental data are combined with weak assumptions. The canonical finding is that the average treatment effect within some context-specific subpopulation is identified. The other branch specifies a population of a priori interest and seeks to learn about treatment effects in this population. Here the canonical finding is a bound on average treatment effects. The various approaches to the analysis of experiments are complementary from a mathematical perspective, but in tension as guides to evaluation practice. The reader of an evaluation reporting that some social program "works" or has a "positive impact" should be careful to ascertain what treatment effect has been estimated and under what assumptions.
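As a minimal illustration of the bounding approach described above (a sketch under assumed numbers, not a result from the article): suppose the outcome is bounded, say Y in [0, 1], and some subjects assigned to each arm do not yield an observed outcome under their assigned treatment (e.g., through noncompliance or attrition). Then the mean outcome under each treatment can be bounded by filling in the unobserved outcomes with the logical extremes, yielding a worst-case interval for the average treatment effect. The function name and inputs below are hypothetical, chosen only for the illustration.

```python
def worst_case_ate_bounds(mean_y_treated, p_obs_treated,
                          mean_y_control, p_obs_control,
                          y_min=0.0, y_max=1.0):
    """Worst-case bounds on the average treatment effect for a bounded outcome.

    mean_y_treated / mean_y_control: observed mean outcome among subjects
        whose outcome under the assigned treatment is actually observed.
    p_obs_treated / p_obs_control: fraction of each assigned arm whose
        outcome is observed; the rest are filled in with y_min or y_max.
    """
    # Bounds on E[Y(1)]: unobserved outcomes set to the logical extremes.
    lo1 = mean_y_treated * p_obs_treated + y_min * (1 - p_obs_treated)
    hi1 = mean_y_treated * p_obs_treated + y_max * (1 - p_obs_treated)
    # Bounds on E[Y(0)], by the same construction.
    lo0 = mean_y_control * p_obs_control + y_min * (1 - p_obs_control)
    hi0 = mean_y_control * p_obs_control + y_max * (1 - p_obs_control)
    # ATE = E[Y(1)] - E[Y(0)]: lower bound pairs lo1 with hi0, and so on.
    return (lo1 - hi0, hi1 - lo0)


# Example with assumed numbers: 80% observed in the treated arm, 90% in
# the control arm. The interval's width equals the total missing mass,
# (1 - 0.8) + (1 - 0.9) = 0.3, so partial observability alone caps how
# much an experiment can reveal without further assumptions.
lo, hi = worst_case_ate_bounds(0.6, 0.8, 0.4, 0.9)
```

Note that the bounds collapse to a point estimate only when every assigned subject's outcome is observed under the assigned treatment; otherwise the experiment, by itself, identifies an interval rather than a number, which is the sense in which the abstract's "bound on average treatment effects" arises.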
