Systematic reviews of health effects of social interventions: 2. Best available evidence: how low should you go?

Study objective: There is little guidance on how to select the best available evidence of the health effects of social interventions. The aim of this paper was to assess the implications of setting particular inclusion criteria for evidence synthesis.

Design: Analysis of all relevant studies for one systematic review, followed by sensitivity analysis of the effects of selecting studies based on a two-dimensional hierarchy of study design and study population.

Setting: Case study of a systematic review of the effectiveness of interventions in promoting a population shift from using cars towards walking and cycling.

Main results: The distribution of available evidence was skewed. Population level interventions were less likely than individual level interventions to have been studied using the most rigorous study designs; nearly all of the population level evidence would have been missed if only randomised controlled trials had been included. Examining the excluded studies did not change the overall conclusions about effectiveness, but it did identify additional categories of intervention, such as health walks and parking charges, that merit further research, and it provided evidence to challenge assumptions about the actual effects of progressive urban transport policies.

Conclusions: Unthinking adherence to a hierarchy of study design as a means of selecting studies may reduce the value of evidence synthesis and reinforce an "inverse evidence law" whereby the least is known about the effects of interventions most likely to influence whole populations. Producing generalisable estimates of effect sizes is only one possible objective of evidence synthesis. Mapping the available evidence, and the uncertainty about effects, may also be important.
