Substitution and Dropout Bias in Social Experiments: A Study of an Influential Social Experiment

This paper considers the interpretation of evidence from social experiments when persons randomized out of the program being evaluated have access to close substitutes for it, and when persons randomized into the program drop out to pursue better alternatives. Using data from an experimental evaluation of a classroom training program, we document the empirical importance of control group substitution and treatment group dropout. Evidence that one program is ineffective relative to close substitutes is not evidence that the type of service provided by all of the programs is ineffective, although experimental evidence is often interpreted that way.
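To fix ideas, here is a minimal sketch of how these two forms of contamination distort the experimental contrast. It is a standard Wald/IV-type rescaling, offered as an illustration under stated assumptions rather than as the paper's own estimator, and the notation (p_1, p_0, \bar{Y}_1, \bar{Y}_0) is introduced here for that purpose. Let p_1 and p_0 denote the fractions of the treatment and control groups that actually receive classroom training (whether from the program or from a substitute), and let \bar{Y}_1 and \bar{Y}_0 denote their mean outcomes. Assuming a homogeneous per-recipient impact \Delta and that substitute training is equivalent to program training, the simple experimental mean difference identifies only

\[
\bar{Y}_1 - \bar{Y}_0 \;=\; \Delta \,(p_1 - p_0),
\]

so the per-recipient effect is recovered as

\[
\hat{\Delta} \;=\; \frac{\bar{Y}_1 - \bar{Y}_0}{\hat{p}_1 - \hat{p}_0}.
\]

When control group members substitute (p_0 > 0) and treatment group members drop out (p_1 < 1), the unadjusted mean difference understates \Delta, and in no case does it measure the effect of training relative to receiving no training at all; it measures the effect of the offer relative to the control group's mix of substitutes.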
