Assumptions of Value-Added Models for Estimating School Effects

The ability of school (or teacher) value-added models to provide unbiased estimates of school (or teacher) effects rests on a set of assumptions. In this article, we identify six assumptions that are required so that the estimands of such models are well defined and the models are able to recover the desired parameters from observable data. These assumptions are (1) manipulability, (2) no interference between units, (3) interval scale metric, (4) homogeneity of effects, (5) strongly ignorable assignment, and (6) functional form. We discuss the plausibility of these assumptions and the consequences of their violation. In particular, because the consequences of violations of the last three assumptions have not been assessed in prior literature, we conduct a set of simulation analyses to investigate the extent to which plausible violations of them alter inferences from value-added models. We find that modest violations of these assumptions degrade the quality of value-added estimates but that models that explicitly account for heterogeneity of school effects are less affected by violations of the other assumptions.
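The role of the strongly ignorable assignment assumption can be illustrated with a minimal simulation sketch. This is a hypothetical toy example, not the paper's actual simulation design: students are sorted into schools by prior achievement (non-random assignment), and a naive mean-outcome estimate of school effects is compared with a simple gain-score adjustment for the prior score.

```python
import numpy as np

rng = np.random.default_rng(0)
n_schools, n_per = 20, 200
true_effects = rng.normal(0, 1, n_schools)

# Students sorted into schools by prior achievement: assignment depends on
# an observed covariate, so a model ignoring it is biased.
prior = rng.normal(0, 1, n_schools * n_per)
order = np.argsort(prior)
school = np.empty_like(order)
school[order] = np.repeat(np.arange(n_schools), n_per)

# Outcome = prior achievement + true school effect + noise.
post = prior + true_effects[school] + rng.normal(0, 0.5, prior.size)

# Naive estimate: mean outcome per school (no adjustment for prior score).
naive = np.array([post[school == s].mean() for s in range(n_schools)])

# VAM-style estimate: simple gain-score adjustment before averaging.
gain = post - prior
vam = np.array([gain[school == s].mean() for s in range(n_schools)])

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print("naive vs true:", corr(naive, true_effects))
print("adjusted vs true:", corr(vam, true_effects))
```

Under this toy data-generating process the adjusted estimates track the true school effects far more closely than the naive school means, mirroring why conditioning on prior achievement is central to value-added estimation.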
