Interpretable Deep Learning Framework for Predicting All-Cause 30-Day ICU Readmissions

ICU readmissions are costly, and most early ICU readmissions in the United States are potentially avoidable. Following the US government's push to reduce avoidable readmissions, there has been a surge in research aimed at lowering readmission rates. The widespread adoption of Electronic Health Records (EHRs) has made large amounts of clinical data available for analysis, creating new opportunities to discover meaningful data-driven characteristics and to apply machine learning algorithms. The sequential structure of EHR data can be harnessed by state-of-the-art deep learning algorithms. While deep models have been rapidly adopted in many domains, their uptake in the healthcare sector has been slow, owing to the lack of interpretability of these black-box models; many clinical applications therefore still prefer simple but interpretable machine learning models. In this project, we implement a knowledge-distillation approach called Interpretable Mimic Learning for predicting 30-day ICU readmissions. With this approach, the knowledge of deep models is transferred to simple, interpretable models, combining the accuracy and sequential-learning capability of deep models with the interpretability of simple models.

Keywords: ICU readmissions, deep learning, interpretability, RNN, LSTM.
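The mimic-learning pipeline the abstract describes reduces to three steps: train a deep teacher on the true labels, score the training data with the teacher to obtain soft probability targets, and fit a simple, interpretable student to regress those soft targets. The sketch below illustrates this with an LSTM teacher and a gradient boosting tree student; the data shapes, layer sizes, and hyperparameters are illustrative assumptions, not the configuration used in this project.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-ins for EHR-style inputs: N ICU stays, T hourly
# time steps, F physiological features, and a binary 30-day
# readmission label. All sizes here are illustrative only.
N, T, F = 1000, 48, 17
X = np.random.randn(N, T, F).astype("float32")
y = np.random.randint(0, 2, size=N)

# Step 1: train the deep "teacher" -- an LSTM binary classifier
# that can exploit the sequential structure of the time series.
teacher = tf.keras.Sequential([
    layers.Input(shape=(T, F)),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),
])
teacher.compile(optimizer="adam", loss="binary_crossentropy")
teacher.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Step 2: replace the hard labels with the teacher's predicted
# probabilities -- the "knowledge" being distilled.
soft_targets = teacher.predict(X, verbose=0).ravel()

# Step 3: fit the interpretable student, a gradient boosting tree
# regressor, on flattened features against the soft targets.
student = GradientBoostingRegressor(n_estimators=100, max_depth=3)
student.fit(X.reshape(N, T * F), soft_targets)

# The student now approximates the LSTM's decision function while
# remaining an inspectable tree ensemble whose feature importances
# and partial dependence plots can be shown to clinicians.
readmission_risk = student.predict(X.reshape(N, T * F))
```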
