A Practical Guide to Regression Discontinuity

Dissemination of MDRC publications is supported by funders that help finance MDRC's public policy outreach and expanding efforts to communicate the results and implications of our work to policymakers, practitioners, and others. The findings and conclusions in this paper do not necessarily represent the official positions or policies of the funders. For information about MDRC and copies of our publications, see our Web site: www.mdrc.org.

Abstract

Regression discontinuity (RD) analysis is a rigorous nonexperimental approach that can be used to estimate program impacts in situations in which candidates are selected for treatment based on whether their value on a numeric rating exceeds a designated threshold or cut-point. Over the last two decades, the regression discontinuity approach has been used to evaluate the impact of a wide variety of social programs. Yet, despite the growing popularity of the approach, there is only a limited amount of accessible information to guide researchers in the implementation of an RD design. While the approach is intuitively appealing, the statistical details of implementing an RD design are more complicated than they might first appear. Most of the guidance that currently exists appears in technical journals that require a high degree of statistical sophistication to read. Furthermore, the terminology that is used is not well defined and is often applied inconsistently. Finally, while a number of different approaches to implementing an RD design have been proposed in the literature, they each differ slightly in their details. As a result, even researchers with a fairly sophisticated statistical background can find it difficult to obtain practical guidance for implementing an RD design. To help fill this void, the present paper is intended to serve as a practitioners' guide to implementing RD designs.
It seeks to explain things in easy-to-understand language and to offer best practices and general guidance to those attempting an RD analysis. In addition, the guide illustrates the various techniques available to researchers and explores their strengths and weaknesses using a simulated data set. The guide provides a general overview of the RD approach and then covers the following topics in detail: (1) graphical presentation in RD analysis, (2) estimation (both parametric and nonparametric), (3) establishing the internal validity of RD impacts, (4) the precision of RD estimates, (5) the generalizability of RD findings, and (6) estimation and precision in the context of a fuzzy RD analysis. …
