Estimating real-world performance of a predictive model: a case-study in predicting mortality

Abstract

Objective: One primary consideration when developing predictive models is the downstream effect of design choices on future model performance. We conduct experiments to quantify how two such choices, cohort selection and internal validation method, affect estimates of real-world model performance.

Materials and Methods: Four years of hospitalizations are used to develop a 1-year mortality prediction model (a composite of death or initiation of hospice care). Two common methods of selecting appropriate patient visits from the encounter history (backwards-from-outcome and forwards-from-admission) are combined with 2 testing cohorts (random and temporal validation). Two models are trained under otherwise identical conditions, and their performance compared. Operating thresholds are selected in each test set and applied to a "real-world" cohort of labeled admissions from another, unused year.

Results: Backwards-from-outcome cohort selection retains 25% of candidate admissions (n = 23 579), whereas forwards-from-admission selection includes many more (n = 92 148). Both selection methods produce similar performance when applied to a random test set. However, when applied to the temporally defined "real-world" set, forwards-from-admission yields higher areas under the ROC and precision-recall curves (88.3% and 56.5% vs. 83.2% and 41.6%).

Discussion: A backwards-from-outcome experiment manipulates the raw training data, simplifying the experiment. The manipulated data no longer resemble real-world data, resulting in optimistic estimates of test set performance, especially at high precision. In contrast, a forwards-from-admission experiment with a temporally separated test set consistently and conservatively estimates real-world performance.

Conclusion: Experimental design choices impose bias on selected cohorts. A forwards-from-admission experiment, validated temporally, can conservatively estimate real-world performance.
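The two cohort-selection strategies and the temporal split described above can be illustrated with a minimal sketch. All record fields, dates, and function names below are hypothetical, invented for illustration; the paper's actual cohort construction is not specified at this level of detail.

```python
from datetime import date

# Hypothetical admission records: (patient_id, admit_date, outcome_within_1yr).
# These values are illustrative only, not drawn from the study's data.
admissions = [
    ("p1", date(2014, 3, 1), False),
    ("p1", date(2015, 6, 1), True),
    ("p2", date(2014, 7, 1), False),
    ("p2", date(2016, 2, 1), False),
    ("p3", date(2015, 9, 1), True),
    ("p4", date(2016, 4, 1), False),
]

def backwards_from_outcome(records):
    """Keep one index admission per patient, chosen relative to the
    (future) outcome -- here, the last admission on record. This uses
    information unavailable at prediction time and shrinks the cohort."""
    last = {}
    for pid, admit, label in records:
        if pid not in last or admit > last[pid][1]:
            last[pid] = (pid, admit, label)
    return list(last.values())

def forwards_from_admission(records):
    """Keep every admission: each visit is a prediction opportunity,
    mirroring how a deployed model would score incoming patients."""
    return list(records)

def temporal_split(records, cutoff):
    """Train on admissions before the cutoff date, test on those at or
    after it -- approximating training on the past and deploying on
    the future, as opposed to a random train/test split."""
    train = [r for r in records if r[1] < cutoff]
    test = [r for r in records if r[1] >= cutoff]
    return train, test

cohort = forwards_from_admission(admissions)
train, test = temporal_split(cohort, date(2016, 1, 1))
```

Under this toy data, backwards-from-outcome keeps one row per patient (4 of 6 admissions), while forwards-from-admission keeps all 6; the temporal split then reserves the 2016 admissions as the held-out "future" set.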
Lay Summary

The routine care of patients stands to benefit greatly from assistive technologies, including data-driven risk assessment. Already, many machine learning and artificial intelligence applications are being developed from complex electronic health record data. To overcome the challenges such data pose, researchers often start with simplified experimental approaches to test their work. One key choice is how patients (and their healthcare visits) are selected for the study from the pool of all patients seen. Another is how the group of patients used to create the risk estimator differs from the group used to evaluate how well it works. These choices shape how well the experimental setting approximates real-world application to patients. For example, selection approaches that depend on each patient's future outcome can simplify the experiment but are impractical upon implementation, as those data are unavailable at prediction time. We show that this kind of "backwards" experiment optimistically estimates how well the model performs. Instead, our results advocate for experiments that select patients in a "forwards" manner, together with "temporal" validation that approximates training on past data and deploying on future data. More robust results help gauge the clinical utility of recent work and aid decision-making before implementation in practice.
