Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies

Abstract

Objective: To systematically examine the design, reporting standards, risk of bias, and claims of studies comparing the performance of diagnostic deep learning algorithms for medical imaging with that of expert clinicians.

Design: Systematic review.

Data sources: Medline, Embase, Cochrane Central Register of Controlled Trials, and the World Health Organization trial registry from 2010 to June 2019.

Eligibility criteria for selecting studies: Randomised trial registrations and non-randomised studies comparing the performance of a deep learning algorithm in medical imaging with a contemporary group of one or more expert clinicians. Medical imaging has seen growing interest in deep learning research. The main distinguishing feature of convolutional neural networks (CNNs) in deep learning is that, when fed raw data, they develop their own representations needed for pattern recognition: the algorithm learns for itself which features of an image are important for classification, rather than being told by humans which features to use. The selected studies aimed to use medical imaging to predict the absolute risk of existing disease or to classify images into diagnostic groups (eg, disease or non-disease). For example, raw chest radiographs are tagged with a label such as pneumothorax or no pneumothorax, and the CNN learns which pixel patterns suggest pneumothorax.

Review methods: Adherence to reporting standards was assessed by using CONSORT (consolidated standards of reporting trials) for randomised studies and TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) for non-randomised studies. Risk of bias was assessed by using the Cochrane risk of bias tool for randomised studies and PROBAST (prediction model risk of bias assessment tool) for non-randomised studies.

Results: Only 10 records of randomised clinical trials of deep learning were found; two have been published (with low risk of bias, except for lack of blinding, and with high adherence to reporting standards) and eight are ongoing. Of 81 non-randomised studies identified, only nine were prospective and just six were tested in a real world clinical setting. The median number of experts in the comparator group was only four (interquartile range 2-9). Full access to all datasets and code was severely limited (unavailable in 95% and 93% of studies, respectively). The overall risk of bias was high in 58 of 81 studies, and adherence to reporting standards was suboptimal (<50% adherence for 12 of 29 TRIPOD items). Sixty one of 81 studies (75%) stated in their abstract that the performance of artificial intelligence was at least comparable to (or better than) that of clinicians. Only 31 of 81 studies (38%) stated that further prospective studies or trials were required.

Conclusions: Few prospective deep learning studies and randomised trials exist in medical imaging. Most non-randomised studies are not prospective, are at high risk of bias, and deviate from existing reporting standards. Data and code availability are lacking in most studies, and human comparator groups are often small. Future studies should diminish risk of bias, enhance real world clinical relevance, improve reporting and transparency, and appropriately temper conclusions.

Study registration: PROSPERO CRD42019123605.
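The CNN description in the eligibility criteria can be made concrete with a short sketch. The following PyTorch snippet is a minimal illustration, not code from any reviewed study; the architecture, the 224x224 single-channel input size, and the pneumothorax/no-pneumothorax labels are assumptions chosen to mirror the chest radiograph example above. The point is only that the convolution filters, and therefore the image features, are learned from raw pixels during training rather than specified by hand.

```python
# Minimal, hypothetical sketch of the kind of CNN described above: it is fed raw
# pixel data plus a binary label (pneumothorax / no pneumothorax) and learns its
# own features. Not an implementation from any of the reviewed studies.
import torch
import torch.nn as nn

class TinyChestCNN(nn.Module):
    """Toy binary classifier for single-channel chest radiographs."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(          # learned filters, not hand-crafted features
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                    # 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                    # 112 -> 56
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),            # global average pooling
            nn.Flatten(),
            nn.Linear(32, 1),                   # single logit for "pneumothorax"
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One gradient step on a random batch standing in for labelled radiographs.
model = TinyChestCNN()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(8, 1, 224, 224)           # batch of raw pixel arrays
labels = torch.randint(0, 2, (8, 1)).float()   # 1 = pneumothorax, 0 = no pneumothorax

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

In practice the reviewed studies used far deeper networks and large curated datasets; the sketch only illustrates the representation-learning step that distinguishes CNNs from classifiers built on hand-specified features.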
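Likewise, the adherence figures in the results (such as <50% adherence for 12 of 29 TRIPOD items) reduce to simple per-item proportions. The sketch below uses randomly generated stand-in ratings, not the review's actual data: each study is scored as adherent or not on each checklist item, and items with adherence below 50% are counted.

```python
# Illustrative sketch (hypothetical data, not the review's ratings) of summarising
# per-item adherence to a reporting checklist such as TRIPOD.
import random

random.seed(0)
n_studies, n_items = 81, 29

# ratings[s][i] = True if study s adhered to checklist item i (random stand-in data)
ratings = [[random.random() < 0.6 for _ in range(n_items)] for _ in range(n_studies)]

# Proportion of the 81 studies adhering to each of the 29 items
adherence = [sum(study[i] for study in ratings) / n_studies for i in range(n_items)]

low_items = [i + 1 for i, p in enumerate(adherence) if p < 0.5]
print(f"{len(low_items)} of {n_items} items fell below 50% adherence: {low_items}")
```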

[1]  J. Ioannidis, et al. Reproducible research practices, transparency, and open access data in the biomedical literature, 2015–2017. PLoS Biology, 2018.

[2]  G. Collins, et al. Uniformity in measuring adherence to reporting guidelines: the example of TRIPOD for assessing completeness of reporting of prediction model studies. BMJ Open, 2019.

[3]  M. Ratner. FDA backs clinician-free AI imaging diagnostic tools. Nature Biotechnology, 2018.

[4]  D. Moher, et al. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials. Journal of Clinical Epidemiology, 2010.

[5]  T. Berzin, et al. Real-time automatic detection system increases colonoscopic polyp and adenoma detection rates: a prospective randomised controlled study. Gut, 2019.

[6]  G. S. Collins, et al. Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD): Explanation and Elaboration. Annals of Internal Medicine, 2015.

[7]  K. Moons, et al. PROBAST: A Tool to Assess Risk of Bias and Applicability of Prediction Model Studies: Explanation and Elaboration. Annals of Internal Medicine, 2019.

[8]  I. Boutron, et al. Three randomized controlled trials evaluating the impact of “spin” in health news stories reporting studies of pharmacologic treatments on patients’/caregivers’ interpretation of treatment benefit. BMC Medicine, 2019.

[9]  G. Collins, et al. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement. BMJ, 2015.

[10]  A. Esteva, et al. A guide to deep learning in healthcare. Nature Medicine, 2019.

[11]  R. Tibshirani, et al. Increasing value and reducing waste in research design, conduct, and analysis. The Lancet, 2014.

[12]  W. Lim, et al. Deep neural networks show an equivalent and often superior performance to dermatologists in onychomycosis diagnosis: Automatic construction of onychomycosis datasets by region-based convolutional deep neural network. PLoS One, 2018.

[13]  E. Wallace, et al. Framework for the impact analysis and implementation of Clinical Prediction Rules (CPRs). BMC Medical Informatics and Decision Making, 2011.

[14]  B. Psaty, et al. COX-2 inhibitors – lessons in drug safety. The New England Journal of Medicine, 2005.

[15]  D. DeMets, et al. Surrogate End Points in Clinical Trials: Are We Being Misled? Annals of Internal Medicine, 1996.

[17]  J. Hoey, et al. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. The New England Journal of Medicine, 2005.

[18]  G. Collins, et al. Artificial Intelligence Algorithms for Medical Prediction Should Be Nonproprietary and Readily Available. JAMA Internal Medicine, 2019.

[19]  B. Allen. The Role of the FDA in Ensuring the Safety and Efficacy of Artificial Intelligence Software and Devices. Journal of the American College of Radiology, 2019.

[20]  X. Wu, et al. Diagnostic Efficacy and Therapeutic Decision-making Capacity of an Artificial Intelligence Platform for Childhood Cataracts in Eye Clinics: A Multicentre Randomized Controlled Trial. EClinicalMedicine, 2019.

[21]  J. Hoey, et al. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. Circulation, 2005.

[22]  K. Thorlund, et al. Reanalyses of randomized clinical trial data. JAMA, 2014.

[23]  S. Aral, et al. The spread of true and false news online. Science, 2018.

[24]  J. Sterne, et al. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ, 2011.

[25]  G. E. Hinton, et al. Deep Learning. Nature, 2015.

[26]  G. Collins, et al. PROBAST: A Tool to Assess the Risk of Bias and Applicability of Prediction Model Studies. Annals of Internal Medicine, 2019.

[27]  R. Ghani, et al. Machine learning and AI research for Patient Benefit: 20 Critical Questions on Transparency, Replicability, Ethics and Effectiveness. arXiv, 2018.

[28]  D. Moher, et al. Reducing waste from incomplete or unusable reports of biomedical research. The Lancet, 2014.

[29]  P. Sumner, et al. Exaggerations and Caveats in Press Releases and Health-Related Science News. PLoS One, 2016.

[30]  B. A. Nosek, et al. Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nature Human Behaviour, 2018.

[31]  G. S. Collins, et al. Reporting of artificial intelligence prediction models. The Lancet, 2019.

[32]  C. Kalkman, et al. Evaluating the impact of prediction models: lessons learned, challenges, and recommendations. Diagnostic and Prognostic Research, 2018.

[33]  E. J. Topol, et al. High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine, 2019.

[34]  S.-H. Chiou, et al. Artificial intelligence-based decision-making for age-related macular degeneration. Theranostics, 2019.

[35]  D. Moher, et al. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ, 2009.