Med-BERT: pretrained contextualized embeddings on large-scale structured electronic health records for disease prediction

Deep learning (DL)-based predictive models built from electronic health records (EHRs) deliver impressive performance on many clinical tasks. However, large training cohorts are often required to achieve high accuracy, which hinders the adoption of DL-based models when only limited training data are available. Recently, bidirectional encoder representations from transformers (BERT) and related models have achieved tremendous success in natural language processing. Pre-training BERT on a very large corpus generates contextualized embeddings that can boost the performance of models trained on smaller datasets. We propose Med-BERT, which adapts the BERT framework for pre-training contextualized embedding models on structured diagnosis data from an EHR dataset of 28,490,650 patients. Fine-tuning experiments are conducted on two disease-prediction tasks from two clinical databases: (1) prediction of heart failure in patients with diabetes and (2) prediction of pancreatic cancer. Med-BERT substantially improves prediction accuracy, boosting the area under the receiver operating characteristic curve (AUC) by 2.02-7.12%. In particular, pre-trained Med-BERT substantially improves the performance of tasks with very small fine-tuning training sets (300-500 samples), boosting the AUC by more than 20%, equivalent to the AUC attainable with a training set 10 times larger. We believe that Med-BERT will benefit disease-prediction studies with small local training datasets, reduce data-collection expenses, and accelerate the pace of artificial intelligence (AI)-aided healthcare.
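To make the fine-tuning setup concrete, the following is a minimal sketch (not the authors' released code) of how a BERT-style encoder over diagnosis-code sequences can be paired with a binary prediction head, in the spirit of the heart-failure task. The vocabulary size, model dimensions, and toy inputs are illustrative assumptions, and a randomly initialized HuggingFace `transformers` BertModel stands in for the pre-trained Med-BERT encoder.

```python
# Sketch: fine-tuning a BERT-style encoder on diagnosis-code sequences
# for binary disease prediction. All sizes and inputs are toy assumptions.
import torch
from torch import nn
from transformers import BertConfig, BertModel

NUM_DIAGNOSIS_CODES = 500  # assumed toy vocabulary; real code vocabularies are far larger

config = BertConfig(
    vocab_size=NUM_DIAGNOSIS_CODES + 2,  # room for [PAD]/[MASK]-style special tokens
    hidden_size=192,
    num_hidden_layers=6,
    num_attention_heads=6,
    max_position_embeddings=512,
)

class DiseasePredictor(nn.Module):
    """BERT encoder over a patient's diagnosis-code sequence + binary head."""
    def __init__(self, config):
        super().__init__()
        # In practice this encoder would be loaded from pre-trained Med-BERT weights.
        self.encoder = BertModel(config)
        self.head = nn.Linear(config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]   # first-token summary of the sequence
        return self.head(pooled).squeeze(-1)   # logit for, e.g., heart-failure onset

# One toy fine-tuning step on random "patients" (each row = one code sequence).
model = DiseasePredictor(config)
input_ids = torch.randint(2, NUM_DIAGNOSIS_CODES + 2, (4, 32))
attention_mask = torch.ones_like(input_ids)
labels = torch.tensor([0.0, 1.0, 0.0, 1.0])

logits = model(input_ids, attention_mask)
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
loss.backward()  # gradients for a single fine-tuning update
```

With a pre-trained encoder in place of the random initialization, only the small labeled cohort (e.g., a few hundred patients) is needed for this fine-tuning step, which is the regime where the paper reports the largest AUC gains.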
