To RCT or not to RCT: deciding when ‘more evidence is needed’ for public health policy and practice

Background: Amid the calls for ‘more public health evidence’, we also need simple, understandable methods for determining when more research really is needed. This paper describes a simple decision aid to help policymakers, researchers and other decision makers assess the potential ‘information value’ of a new public health randomised controlled trial.

Methods: The authors developed a flow chart to make explicit (1) the user's information needs, (2) the intended use of the new information that the study will produce, (3) the added value of the evidence to be derived from the new study and (4) the levels of precision, bias and generalisability required by the user.

Results: The flow chart is briefly illustrated, first in generic form and then in a worked example showing how it may be used to decide whether a new study should be commissioned to evaluate the health impact of allowing motorcycles to use bus lanes in London.

Conclusions: The authors present a flow chart for enacting an informal ‘Value-of-Information’-like approach to deciding when a new public health evaluation is needed. The flow chart is not technically equivalent to formal Value-of-Information methods. Nonetheless, it offers a valuable perspective and process, and this structured approach will be more revealing than an unstructured thought experiment as a basis for decisions about a new study. To aid its development as an effective tool, the authors invite users from a variety of perspectives and contexts to review it, use it in practice and send their comments.
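The four considerations the flow chart makes explicit can be read as a sequence of gating questions. The following is a minimal, hypothetical sketch of that logic in Python; the function name, parameter names and the specific decision branches are illustrative assumptions, not the authors' actual flow chart, which readers should consult in the paper itself.

```python
def should_commission_new_study(
    information_need_identified: bool,   # (1) is there a clear information need?
    existing_evidence_sufficient: bool,  # (2) does existing evidence already serve the intended use?
    results_could_change_decision: bool, # (3) would the new evidence add value to the decision?
    required_rigour_achievable: bool,    # (4) can the required precision/bias/generalisability be met?
) -> str:
    """Hypothetical gating logic loosely mirroring the four flow-chart questions.

    Each question must be answered favourably before proceeding to the next;
    any unfavourable answer yields a reason not to commission the study as planned.
    """
    if not information_need_identified:
        return "no new study: no clear information need"
    if existing_evidence_sufficient:
        return "no new study: existing evidence already answers the question"
    if not results_could_change_decision:
        return "no new study: results would not change the decision"
    if not required_rigour_achievable:
        return "reconsider design: required precision, bias or generalisability cannot be achieved"
    return "commission the new study"


# Illustrative use for a hypothetical reading of the bus-lane example:
# a clear information need, insufficient existing evidence, decision-relevant
# results, and an achievable design would point toward commissioning the study.
verdict = should_commission_new_study(True, False, True, True)
```

The value of such a structure, as the paper argues, is not computational but procedural: it forces each question to be asked explicitly rather than leaving the "more evidence is needed" judgment implicit.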
