Neighborhood Contrastive Learning Applied to Online Patient Monitoring

Intensive care units (ICUs) are increasingly turning to machine learning to provide online monitoring of critically ill patients. In machine learning, online monitoring is often formulated as a supervised learning problem. Recently, contrastive learning approaches have demonstrated promising improvements over competitive supervised benchmarks. These methods rely on well-understood data augmentation techniques developed for image data, which do not apply to online patient monitoring. In this work, we overcome this limitation by supplementing time-series data augmentation techniques with a novel contrastive learning objective, which we call neighborhood contrastive learning (NCL). Our objective explicitly groups together contiguous time segments from each patient while maintaining state-specific information. Our experiments demonstrate a marked improvement over existing work applying contrastive methods to medical time series.
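To make the idea of "grouping contiguous time segments from each patient" concrete, the sketch below shows one way a neighborhood-based contrastive objective could be written as an InfoNCE-style loss in PyTorch, where positives are pairs of windows drawn from the same patient within a temporal neighborhood. The function name `neighborhood_contrastive_loss`, the window size `w`, and the temperature are illustrative assumptions for this sketch, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def neighborhood_contrastive_loss(z, patient_ids, timestamps, w=12, temperature=0.1):
    """InfoNCE-style loss where positives are embeddings of time windows
    drawn from the same patient and lying within `w` steps of each other.

    z           : (N, D) embeddings of N sampled time windows
    patient_ids : (N,)   patient identifier for each window
    timestamps  : (N,)   window start time (in steps) for each window
    """
    z = F.normalize(z, dim=1)                         # cosine-similarity space
    sim = z @ z.t() / temperature                     # (N, N) similarity logits

    # Neighborhood mask: same patient AND temporally close windows count as positives.
    same_patient = patient_ids.unsqueeze(0) == patient_ids.unsqueeze(1)
    close_in_time = (timestamps.unsqueeze(0) - timestamps.unsqueeze(1)).abs() <= w
    pos_mask = same_patient & close_in_time
    pos_mask.fill_diagonal_(False)                    # exclude self-pairs

    # Log-softmax over all other samples (self excluded from the denominator).
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Average log-likelihood of positives, for anchors that have at least one.
    pos_counts = pos_mask.sum(dim=1)
    has_pos = pos_counts > 0
    loss = -(log_prob * pos_mask).sum(dim=1)[has_pos] / pos_counts[has_pos]
    return loss.mean()
```

Under this reading, keeping "state-specific information" amounts to not collapsing all windows of a patient onto a single point: only windows within the neighborhood `w` are pulled together, while distant windows from the same patient still act as negatives.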
