Do We Need Experimental Data To Evaluate the Impact of Manpower Training On Earnings?

This article assesses several recent studies in the manpower training evaluation literature claiming that (1) nonexperimental methods of program evaluation produce unreliable estimates of program impacts and (2) randomized experiments are necessary to produce reliable ones. We offer a more optimistic assessment of the value of nonexperimental methods for analyzing the effects of training programs on earnings. Previous empirical demonstrations of the sensitivity of program-impact estimates to alternative nonexperimental procedures either do not test the validity of the testable assumptions that justify those procedures or else disregard the inferences from such tests. We reanalyze data from the National Supported Work Demonstration (NSW) experiment used by LaLonde and by Fraker and Maynard, and reexamine the performance of nonexperimental estimators of the net impact of the NSW program on the posttraining earnings of young high school dropouts and adult women. Using several simple strategies for testing the appropriateness of alternative formulations of such estimators, we show that a number of the nonexperimental estimators used in those studies can be rejected. Although these tests eliminate many candidate estimators, the estimators that are not rejected yield net impact estimates that lead to the same inference about the program's impact as the experimental estimates. The empirical results from our limited study provide tangible evidence that the recent denunciation of nonexperimental methods for evaluating manpower training effects is premature.
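
The abstract does not spell out the specification-testing strategies it refers to, but one test in this spirit is to apply a candidate nonexperimental estimator to pre-program earnings, where the true program effect is zero by construction; a statistically significant "effect" in that period is evidence against the estimator's identifying assumptions. The sketch below illustrates this idea only; the data file, column names, and covariate list are hypothetical and not taken from the NSW analysis.

```python
# A minimal sketch (not the authors' code) of a pre-program specification test:
# estimate the "treatment effect" on earnings measured BEFORE the program began.
# If the candidate estimator's assumptions hold, the estimated pre-program
# effect should be indistinguishable from zero. Column names (earnings_pre,
# treated, age, education, ...) are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf


def preprogram_alignment_test(df: pd.DataFrame, alpha: float = 0.05) -> bool:
    """Return True if a cross-sectional regression-adjustment estimator
    passes the pre-program test.

    df is assumed to pool treatment-group members (treated == 1) with a
    nonexperimental comparison group (treated == 0), with earnings measured
    in a year before the program started.
    """
    model = smf.ols(
        "earnings_pre ~ treated + age + education + married",
        data=df,
    ).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
    # Under the estimator's assumptions, the 'treated' coefficient should be
    # statistically indistinguishable from zero in the pre-program period.
    return model.pvalues["treated"] >= alpha


# Usage (hypothetical file): only estimators that pass the test would be
# carried forward to estimate post-program impacts.
# df = pd.read_csv("nsw_with_comparison_group.csv")
# if preprogram_alignment_test(df):
#     print("Estimator not rejected by the pre-program test; proceed.")
# else:
#     print("Spurious pre-program 'effect' detected; reject this estimator.")
```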

[1]  William R. Shadish, et al., Evaluation Studies: Review Annual, 1976.

[2]  Stephen Pudney, et al., Estimating Latent Variable Systems When Specification Is Uncertain: Generalized Component Analysis and the Eliminant Method, 1982.

[3]  James J. Heckman, et al., Longitudinal Analysis of Labor Market Data, 1985.

[4]  L. Lillard, et al., Components of Variation in Panel Earnings Data: American Scientists 1960-70, 1979.

[5]  Albert Madansky, et al., Instrumental Variables in Factor Analysis, 1964.

[6]  J. Spengler, et al., Systematic Thinking for Social Action, 1973.

[7]  John C. Hause, et al., The Covariance Structure of Earnings and the On-the-Job Training Hypothesis, 1973.

[8]  Orley Ashenfelter, et al., Using the Longitudinal Structure of Earnings to Estimate the Effect of Training Programs, 1984.

[9]  J. Heckman, et al., Longitudinal Analysis of Labor Market Data: Alternative Methods for Evaluating the Impact of Interventions, 1985.

[10]  O. Ashenfelter, et al., Estimating the Effect of Training Programs on Earnings, 1976.

[11]  Burt S. Barnow, et al., Issues in the Analysis of Selectivity Bias, Discussion Paper (revised), 1980.

[12]  James J. Heckman, et al., Choosing Among Alternative Nonexperimental Methods for Estimating the Impact of Social Programs: The Case of Manpower Training, 1989.

[13]  James J. Heckman, et al., Alternative Methods for Evaluating the Impact of Interventions: An Overview, 1985.

[14]  J. Heckman, Sample Selection Bias as a Specification Error, 1979.

[15]  Laurie J. Bassi, The Effect of CETA on the Postprogram Earnings of Participants, 1983.

[16]  N. Kiefer, Federally Subsidized Occupational Training and the Employment and Earnings of Male Trainees, 1978.

[17]  R. LaLonde, Evaluating the Econometric Evaluations of Training Programs with Experimental Data, 1984.

[18]  R. LaLonde, et al., How Precise Are Evaluations of Employment and Training Programs?, 1987.

[19]  Laurie J. Bassi, Estimating the Effect of Training Programs with Nonrandom Selection (Final Report, July-November 1980), 1980.

[20]  Rebecca A. Maynard, et al., The Adequacy of Comparison Group Designs for Evaluations of Employment-Related Programs, 1987.

[21]  J. Heckman, Dummy Endogenous Variables in a Simultaneous Equation System, 1977.