Defining Program Effects: A Distribution-Based Perspective

ABSTRACT In an age of accountability, it is critical to define and estimate the effects of teacher education and professional development programs on student learning in ways that allow stakeholders to explore potential reasons for what is observed and to enhance program quality and fidelity. Across the suite of statistical models used for program evaluation, researchers consistently measure program effectiveness using fixed-effect coefficients for programs. We propose that a program effect is best characterized not as a single effect to be estimated, but as a distribution of teacher-specific effects. In this article, we first discuss this approach and then describe one way it could be used to define and estimate program effects within a value-added modeling context. Using an example dataset, we demonstrate how program effect estimates can be obtained using the proposed methodology and explain how distributions of these estimates provide additional information and insights about programs that are not apparent when one looks only at average effects. By examining distributions of teacher-specific effects as proposed, researchers have the opportunity to investigate and understand more deeply the effects of programs on student success.
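The abstract's core idea, treating a program effect as a distribution of teacher-specific effects rather than a single fixed coefficient, can be illustrated with a generic value-added-style mixed model. The sketch below is a minimal, hypothetical example on simulated data (it does not use the authors' dataset or exact model): it fits a random intercept per teacher alongside a fixed effect for prior achievement, then inspects the distribution of the empirical-Bayes (BLUP) teacher effects. Note that the average program effect is absorbed into the fixed intercept, so the BLUPs capture teacher-to-teacher variation around that average, which is exactly the distributional information the article argues is lost when only a fixed program coefficient is reported.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a hypothetical program: 40 teachers, 25 students each.
rng = np.random.default_rng(0)
n_teachers, n_students = 40, 25
teacher = np.repeat(np.arange(n_teachers), n_students)

# Teacher-specific program effects drawn from a distribution
# (mean 2.0, sd 1.5) -- the quantity the article says should be
# studied as a distribution, not collapsed to one number.
true_effects = rng.normal(loc=2.0, scale=1.5, size=n_teachers)

prior = rng.normal(50, 10, size=n_teachers * n_students)   # prior achievement
score = 0.8 * prior + true_effects[teacher] + rng.normal(0, 5, size=prior.size)

df = pd.DataFrame({"score": score, "prior": prior, "teacher": teacher})

# Mixed model: fixed slope for prior achievement, random intercept per teacher.
model = smf.mixedlm("score ~ prior", df, groups=df["teacher"]).fit()

# Empirical-Bayes (BLUP) estimates of the teacher-specific effects.
# Their mean is ~0 because the average effect sits in the fixed intercept;
# their spread (shrunk toward zero) describes the effect distribution.
blups = np.array([re.iloc[0] for re in model.random_effects.values()])
print(f"n teachers: {len(blups)}, BLUP sd: {blups.std():.2f}")
```

In practice one would examine this distribution directly (e.g., a histogram or caterpillar plot of the BLUPs with interval estimates) rather than reporting only the fixed intercept, which is the shift in perspective the article advocates.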
