A Contribution to Current Debates in Impact Evaluation

A debate about approaches to impact evaluation has raged in development circles in recent years. This paper contributes to that debate by discussing four issues. First, I point out that there are two definitions of impact evaluation. Neither is right or wrong, but they refer to completely different things, and methodological debate is pointless unless the parties agree on a common starting point. Second, I argue that counterfactuals, which are implied by the definition of impact evaluation adopted in this paper, are confused with control groups, which are not always necessary to construct a counterfactual. Third, I address arguments for contribution rather than attribution; this distinction is also partly definitional, resting on the mistaken reading of attribution claims as claims of sole attribution. I then consider accusations that such approaches are 'positivist' and 'linear', which are, respectively, correct and unclear. Finally, I suggest that these arguments do not imply a hierarchy of methods; rather, quantitative approaches, including RCTs, are often the most appropriate methods for evaluating the impact of a large range of interventions, and have the added advantage of supporting cost-effectiveness and cost-benefit analysis.
