A central question in evaluation research is ‘what works and why?’ This question suggests a need not only to estimate the average effect of a programme or intervention but also to estimate and understand the variability around this average. Statistical models are good at estimating overall effects and, when extended into a multilevel framework, allow the variability of effects to be estimated. If, however, we aim to understand just why this variability arises, then data generated from qualitative approaches such as case studies are valuable. This paper draws on our experiences of working on the national evaluation of the Children’s Fund—a UK government programme aimed at children aged 5–13 and their families—to set out how qualitative data can enhance the quantitative data used in statistical models, and so lead to a better understanding of the strengths of a programme that is certainly heterogeneous in its delivery and is likely to be heterogeneous in its outcomes.
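To make the multilevel idea concrete, the sketch below (not the authors' own analysis, and using illustrative variable names such as outcome, treated and site, with simulated data) shows how a model with a random coefficient on the treatment indicator estimates both the average programme effect and its variability across delivery sites.

```python
# Minimal illustrative sketch: a multilevel (mixed-effects) model in which the
# programme effect is allowed to vary across sites. All data are simulated and
# the variable names are assumptions for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_sites, n_per_site = 30, 80

# Simulate site-varying impacts: average effect 0.5, SD 0.3 across sites.
site = np.repeat(np.arange(n_sites), n_per_site)
site_effect = rng.normal(0.5, 0.3, n_sites)      # heterogeneous impact
site_intercept = rng.normal(0.0, 0.4, n_sites)   # site-level baseline differences
treated = rng.integers(0, 2, n_sites * n_per_site)
outcome = (site_intercept[site]
           + site_effect[site] * treated
           + rng.normal(0.0, 1.0, n_sites * n_per_site))

df = pd.DataFrame({"outcome": outcome, "treated": treated, "site": site})

# Random intercept and random 'treated' slope by site: the fixed effect gives
# the average impact, while the estimated slope variance quantifies how much
# the impact differs from site to site.
model = smf.mixedlm("outcome ~ treated", df, groups=df["site"],
                    re_formula="~treated")
result = model.fit()
print(result.summary())   # fixed effect of 'treated' = average programme effect
print(result.cov_re)      # random-effects covariance: slope variance = heterogeneity
```

The slope variance tells us how much effects differ across sites, but not why; that is where the qualitative case-study data discussed in the paper come in.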