Harnessing multiple models for outbreak management

Expert elicitation methods and a structured decision-making framework will help account for risk and uncertainty.

The coronavirus disease 2019 (COVID-19) pandemic has triggered efforts by multiple modeling groups to forecast disease trajectory, assess interventions, and improve understanding of the pathogen. Such models often differ substantially in their projections and recommendations, reflecting different policy assumptions and objectives as well as scientific, logistical, and other uncertainty about biological and management processes (1). Disparate predictions during an outbreak can hinder intervention planning and response by policy-makers (2, 3), who may instead choose to rely on a single trusted source of advice, or on consensus where it appears. Valuable insights and information from other models may thus be overlooked, limiting the opportunity for decision-makers to account for risk and uncertainty, and resulting in more lives lost or resources used than necessary. We advocate a more systematic approach that merges two well-established research fields. The first element applies formal expert elicitation methods to multiple models, to deliberately generate, retain, and synthesize valuable individual-model ideas and share important insights during group discussions, while minimizing various cognitive biases. The second element uses a decision-theoretic framework to capture and account for within- and between-model uncertainty while evaluating actions, in a timely manner, against management objectives.
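
To make the second element concrete, the sketch below shows one way projections from multiple models could be combined and weighed against candidate interventions in a decision-theoretic way. It is a minimal, illustrative Python example under stated assumptions, not the implementation advocated here: the model names, projected case counts, elicited weights, and resource costs are all hypothetical, with the weights standing in for the output of a structured expert elicitation and Monte Carlo draws standing in for each model's within-model uncertainty.

# Minimal sketch (illustrative only): decision-theoretic combination of
# projections from several hypothetical outbreak models.
import numpy as np

rng = np.random.default_rng(0)

actions = ["no_intervention", "targeted_vaccination", "mass_vaccination"]
models = ["model_A", "model_B", "model_C"]

# Between-model uncertainty: hypothetical elicited weights reflecting the
# relative credibility an expert panel assigns to each model.
model_weights = {"model_A": 0.5, "model_B": 0.3, "model_C": 0.2}

# Illustrative expected case counts for each (model, action) pair.
projected_mean_cases = {
    ("model_A", "no_intervention"): 5000,
    ("model_A", "targeted_vaccination"): 2000,
    ("model_A", "mass_vaccination"): 800,
    ("model_B", "no_intervention"): 8000,
    ("model_B", "targeted_vaccination"): 1500,
    ("model_B", "mass_vaccination"): 1200,
    ("model_C", "no_intervention"): 3000,
    ("model_C", "targeted_vaccination"): 2500,
    ("model_C", "mass_vaccination"): 700,
}

def project_cases(model, action, n_draws=1000):
    # Within-model uncertainty: Monte Carlo draws around each model's
    # projection (Poisson noise is an arbitrary placeholder).
    return rng.poisson(projected_mean_cases[(model, action)], size=n_draws)

# Management objective: minimize expected cases plus a resource penalty
# expressed in the same (hypothetical) units.
resource_cost = {"no_intervention": 0, "targeted_vaccination": 500, "mass_vaccination": 2000}

def expected_loss(action):
    # Average over within-model draws, then weight across models.
    return sum(
        model_weights[m] * (project_cases(m, action).mean() + resource_cost[action])
        for m in models
    )

losses = {a: expected_loss(a) for a in actions}
best_action = min(losses, key=losses.get)
print(losses)
print("Preferred action under these assumptions:", best_action)

Swapping in a different objective (for example, weighting deaths more heavily than cases) or different elicited weights can change which action is preferred, which is precisely the sensitivity that a structured decision-making process is meant to expose.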

[1] E. R. Dougherty et al., Consensus and conflict among ecological forecasts of Zika virus outbreaks in the United States, bioRxiv, 2017.

[2] M. A. Burgman, Trusting Judgements: How to Get the Best out of Experts, 2015.

[3] M. J. Tildesley et al., Adaptive Management and the Value of Information: Learning Via Intervention in Epidemiology, PLoS Biology, 2014.

[4] G. Chowell et al., The RAPIDD Ebola forecasting challenge: Synthesis and lessons learnt, Epidemics, 2017.

[5] K. O'Donnell et al., Mind the Gap: A Landscape Analysis of Open Source Publishing Tools and Platforms, 2019.

[6] M. Jit et al., Guidelines for multi-model comparisons of the impact of infectious disease interventions, BMC Medicine, 2019.

[7] T. N. Krishnamurti et al., Improved Weather and Seasonal Climate Forecasts from Multimodel Superensemble, Science, 1999.

[8] M. J. Ferrari et al., Essential information: Uncertainty and optimal control of Ebola outbreaks, Proceedings of the National Academy of Sciences, 2017.

[9] A. Culyer et al., Effectiveness and efficiency of methods of dialysis therapy for end-stage renal disease: systematic reviews, Health Technology Assessment, 1998.

[10] A. Carr et al., Primary total hip replacement surgery: a systematic review of outcomes and modelling of cost-effectiveness associated with different prostheses, Health Technology Assessment, 1998.

[11] S. J. Converse et al., Which uncertainty? Using expert elicitation and expected value of information to design an adaptive program, 2011.

[12] A. L. Delbecq et al., A Group Process Model for Problem Identification and Program Planning, 1971.

[13] P. E. Tetlock et al., Superforecasting: The Art and Science of Prediction, 2015.

[14] E. Dutton, Superforecasting: The Art and Science of Prediction, 2016.

[15] R. Gregory et al., Structured Decision Making: A Practical Guide to Environmental Management Choices, 2012.

[16] S. C. Anderson et al., Improving estimates of population status and trend with superensemble models, 2017.

[17] M. J. Ferrari et al., Decision-making for foot-and-mouth disease control: Objectives matter, Epidemics, 2016.

[18] M. A. Burgman et al., A practical guide to structured expert elicitation using the IDEA protocol, 2018.