Assessing the Case for Social Experiments

This paper analyzes the method of social experiments. It exposits the assumptions that justify the experimental method, discusses the parameters of interest in evaluating social programs, and shows how experiments sometimes serve as instrumental variables that identify program impacts. The case most favorable to experiments ignores variability across persons in their responses to treatment and assumes that mean program impacts are the main object of interest in an evaluation. Experiments do not identify the distribution of program gains unless additional assumptions are maintained. Evidence on the validity of the assumptions used to justify social experiments is presented.
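The claim that experiments identify mean impacts but not the distribution of gains can be illustrated with a small simulation. The sketch below uses hypothetical potential outcomes: an experiment reveals only the two marginal distributions of treated and untreated outcomes, so the mean impact is identified, but very different joint distributions (here, the two extreme couplings consistent with the same margins) imply very different distributions of individual gains.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical potential outcomes (illustrative only).
y0 = rng.normal(10.0, 2.0, n)        # outcome without the program
y1 = y0 + rng.normal(1.0, 3.0, n)    # outcome with the program

# An experiment identifies the margins of y1 and y0, hence the mean impact:
mean_impact = y1.mean() - y0.mean()

# But the distribution of gains y1 - y0 depends on the unobserved joint
# distribution. Two couplings consistent with the same margins:
y1_s, y0_s = np.sort(y1), np.sort(y0)
gains_comonotone = y1_s - y0_s            # perfect positive dependence
gains_antimonotone = y1_s - y0_s[::-1]    # perfect negative dependence

# Both couplings reproduce the same mean impact...
print(mean_impact, gains_comonotone.mean(), gains_antimonotone.mean())
# ...but imply very different dispersion of individual gains.
print(gains_comonotone.std(), gains_antimonotone.std())
```

The comonotone coupling minimizes and the antimonotone coupling maximizes the spread of gains, which is the logic behind the Fréchet-style bounds the paper invokes when discussing what additional assumptions would pin down the distribution of program gains.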
