Unlocking the Potential of the “What Works” Approach to Policymaking and Practice

Despite bipartisan support in Washington, DC, dating back to the mid-1990s, the “what works” approach has yet to gain broad traction among policymakers and practitioners. One way to build such support is to increase the usefulness of program impact evaluations for these groups. We describe three ways to make impact evaluations more useful to policy and practice: emphasize learning from all studies over sorting out winners and losers; collect better information on the conditions that shape an intervention's success or failure; and learn about the features of programs and policies that influence their effectiveness. We argue that measuring the treatment contrast between the intervention and comparison condition(s) is important for each of these changes. Measuring and analyzing the treatment contrast will increase costs, however, and policymakers and practitioners already see evaluations as expensive. We therefore offer suggestions for reducing costs in other areas of data collection.
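To make the key term concrete, here is a minimal sketch, assuming the focal intervention component can be measured on a common scale in both conditions: the treatment contrast is the difference between the average amount of that component actually received under the intervention and under the comparison condition(s), that is,

treatment contrast = (mean service level, intervention group) - (mean service level, comparison group).

On this reading, a small contrast signals that comparison-group members received much of the same service, which can mute estimated impacts even when the program model itself is sound.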
