An Evaluation Framework for Health Promotion: Theory, Quality and Effectiveness

There is increasing demand for evaluation work funded by public agencies to focus on demonstrating effectiveness. Concentrating evaluation on outcomes and effectiveness meets the information needs of strategic planners and policy makers, but other stakeholders involved in managing, delivering or using public services and programmes may apply other assessment criteria, such as improving the quality of programmes or programme design. The necessity and value of these other criteria are in danger of being obscured. Acknowledging the legitimacy of this range of stakeholder perspectives, this article presents a framework for evaluation developed over a number of years in the context of evaluating health promotion programmes as part of the work of a national health promotion agency. It argues for an approach to evaluation that recognizes the contributions of theory and quality, as well as effectiveness, to programme development. The Health Education Board for Scotland (HEBS) framework for evaluation, and the analysis that informed it, demonstrates that many stages and forms of evaluation contribute to the development of effective interventions. While outcome evaluations and effectiveness reviews tend to be the prized evaluation products for those concerned with policy and strategic planning, they are just ‘the tip of the iceberg’ of what is required to build a sound evidence base that brings together the full range of evaluation needs from the perspectives of all the different stakeholder groups.
