Reporting of demographic data and representativeness in machine learning models using electronic health records

OBJECTIVE: The development of machine learning (ML) algorithms to address issues faced in clinical practice has increased rapidly. However, questions have arisen about biases in their development that can limit their applicability to specific populations. We sought to evaluate whether studies developing ML models from electronic health record (EHR) data report sufficient demographic data on their study populations to demonstrate representativeness and reproducibility.

MATERIALS AND METHODS: We searched PubMed for articles applying ML models to improve clinical decision-making using EHR data, limiting the search to papers published between 2015 and 2019.

RESULTS: Across the 164 studies reviewed, demographic variables were inconsistently reported and/or included as model inputs. Race/ethnicity was not reported in 64% of studies; gender and age were not reported in 24% and 21% of studies, respectively. Socioeconomic status was not reported in 92% of studies. Studies that mentioned these variables often did not state whether they were included as model inputs. Few models (12%) were validated on external populations, and few studies (17%) open-sourced their code. Compared with the general US population, study populations included higher proportions of White and Black subjects and a lower proportion of Hispanic subjects.

DISCUSSION: The demographic characteristics of study populations are poorly reported in the EHR-based ML literature. Demographic representativeness in training data and model transparency are necessary to ensure that ML models are deployed equitably and reproducibly. Wider adoption of reporting guidelines is warranted to improve representativeness and reproducibility.
