Robust and Efficient Medical Imaging with Self-Supervision

Recent progress in Medical Artificial Intelligence (AI) has delivered systems that can reach clinical-expert-level performance. However, such systems tend to demonstrate sub-optimal "out-of-distribution" performance when evaluated in clinical settings different from the training environment. A common mitigation strategy is to develop separate systems for each clinical setting using site-specific data [1]. However, this quickly becomes impractical, as medical data is time-consuming to acquire and expensive to annotate [2]. Thus, the problem of "data-efficient generalization" presents an ongoing difficulty for Medical AI development. Although progress in representation learning shows promise, its benefits have not been rigorously studied, specifically for out-of-distribution settings. To meet these challenges, we present REMEDIS, a unified representation learning strategy to improve the robustness and data-efficiency of medical imaging AI. REMEDIS uses a generic combination of large-scale supervised transfer learning with self-supervised learning and requires little task-specific customization. We study a diverse range of medical imaging tasks and simulate three realistic application scenarios using retrospective data. REMEDIS exhibits significantly improved in-distribution performance, with up to 11.5% relative improvement in diagnostic accuracy over a strong supervised baseline. More importantly, our strategy leads to strong data-efficient generalization of medical imaging AI, matching strong supervised baselines using between 1% and 33% of the retraining data across tasks. These results suggest that REMEDIS can significantly accelerate the life-cycle of medical imaging AI development, thereby presenting an important step forward for medical imaging AI to deliver broad impact.

REMEDIS starts from a model initialized with large-scale supervised pretraining; we then adapt the model to the medical domain using intermediate contrastive self-supervised learning without using any labeled medical data. Finally, we fine-tune the model to specific downstream medical imaging AI tasks. We evaluate the AI model both in an in-distribution (ID) setting and in an out-of-distribution (OOD) setting to establish the data-efficient generalization performance of the model.
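To make the three-stage recipe concrete, the sketch below shows one way such a pipeline could be wired up. It is a minimal illustration under assumptions, not the authors' implementation: it uses PyTorch/torchvision with an ImageNet-pretrained ResNet-50 standing in for the large-scale supervised pretraining stage, a SimCLR-style contrastive loss for the intermediate adaptation stage, and a linear head for the final fine-tuning stage. The names `nt_xent_loss`, `adapt_contrastively`, `finetune`, and the `unlabeled_loader`, `labeled_loader`, and `num_classes` arguments are hypothetical placeholders introduced here for illustration.

```python
# Minimal sketch of a REMEDIS-style three-stage recipe (assumed PyTorch/torchvision
# re-implementation, not the authors' code). Data loaders and class counts are
# hypothetical placeholders supplied by the caller.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights


def nt_xent_loss(z1, z2, temperature=0.1):
    """SimCLR-style contrastive loss for a batch of positive pairs (z1[i], z2[i])."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                       # (2N, d)
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))           # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)                 # positive = the matching view


# Stage 1: start from large-scale supervised pretraining (ImageNet weights here,
# torchvision >= 0.13).
backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
feat_dim = backbone.fc.in_features
backbone.fc = nn.Identity()                              # keep the 2048-d features only


def adapt_contrastively(backbone, unlabeled_loader, epochs=1, lr=1e-4):
    """Stage 2: intermediate self-supervised adaptation on unlabeled medical images.
    `unlabeled_loader` is assumed to yield two augmented views of each image."""
    projector = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU(), nn.Linear(512, 128))
    params = list(backbone.parameters()) + list(projector.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for view1, view2 in unlabeled_loader:
            loss = nt_xent_loss(projector(backbone(view1)), projector(backbone(view2)))
            opt.zero_grad()
            loss.backward()
            opt.step()


def finetune(backbone, labeled_loader, num_classes, epochs=1, lr=1e-5):
    """Stage 3: supervised fine-tuning on the labeled downstream task."""
    classifier = nn.Linear(feat_dim, num_classes)
    params = list(backbone.parameters()) + list(classifier.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for images, labels in labeled_loader:
            loss = F.cross_entropy(classifier(backbone(images)), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return classifier
```

In this sketch, calling `adapt_contrastively(backbone, unlabeled_loader)` before `finetune(backbone, labeled_loader, num_classes)` mirrors the order of the stages described above; the ID and OOD evaluations would then be run on the fine-tuned model with held-out in-distribution and shifted-distribution test sets, respectively.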

[1]  Timothy M. Hospedales,et al.  How Well Do Self-Supervised Models Transfer? , 2020, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[2]  Alexei Baevski,et al.  Effectiveness of self-supervised pre-training for speech recognition , 2019, ArXiv.

[3]  Laurens van der Maaten,et al.  Self-Supervised Learning of Pretext-Invariant Representations , 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[4]  Ronald M. Summers,et al.  ChestX-ray: Hospital-Scale Chest X-ray Database and Benchmarks on Weakly Supervised Classification and Localization of Common Thorax Diseases , 2019, Deep Learning and Convolutional Neural Networks for Medical Imaging and Clinical Informatics.

[5]  Cuiling Lan,et al.  Generalizing to Unseen Domains: A Survey on Domain Generalization , 2021, IEEE Transactions on Knowledge and Data Engineering.

[6]  Anne L. Martel,et al.  Self supervised contrastive learning for digital histopathology , 2020, Machine Learning with Applications.

[7]  Sharon D. Solomon,et al.  Detection of diabetic foveal edema: contact lens biomicroscopy compared with optical coherence tomography. , 2004, Archives of ophthalmology.

[8]  Xinlei Chen,et al.  Exploring Simple Siamese Representation Learning , 2020, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[9]  Kaiming He,et al.  Exploring the Limits of Weakly Supervised Pretraining , 2018, ECCV.

[10]  Armand Joulin,et al.  Self-supervised Pretraining of Visual Features in the Wild , 2021, ArXiv.

[11]  Julien Mairal,et al.  Unsupervised Learning of Visual Features by Contrasting Cluster Assignments , 2020, NeurIPS.

[12]  S. Saria,et al.  The Clinician and Dataset Shift in Artificial Intelligence. , 2021, The New England journal of medicine.

[13]  Max Welling,et al.  Attention-based Deep Multiple Instance Learning , 2018, ICML.

[14]  Benjamin Recht,et al.  When Robustness Doesn’t Promote Robustness: Synthetic vs. Natural Distribution Shifts on ImageNet , 2019.

[15]  Stella X. Yu,et al.  Unsupervised Feature Learning via Non-parametric Instance Discrimination , 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.

[16]  Gregory Shakhnarovich,et al.  Colorization as a Proxy Task for Visual Understanding , 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

[17]  Pascal Vincent,et al.  Representation Learning: A Review and New Perspectives , 2012, IEEE Transactions on Pattern Analysis and Machine Intelligence.

[18]  Shekoofeh Azizi,et al.  Big Self-Supervised Models Advance Medical Image Classification , 2021, 2021 IEEE/CVF International Conference on Computer Vision (ICCV).

[19]  Hassan M. Ahmad,et al.  Effect of a comprehensive deep-learning model on the accuracy of chest x-ray interpretation by radiologists: a retrospective, multireader multicase study. , 2021, The Lancet. Digital health.

[20]  Geoffrey E. Hinton,et al.  Visualizing Data using t-SNE , 2008, Journal of Machine Learning Research.

[21]  Yang You,et al.  Large Batch Training of Convolutional Networks , 2017, ArXiv.

[22]  Andrew Y. Ng,et al.  Contrastive learning of heart and lung sounds for label-efficient diagnosis , 2021, Patterns.

[23]  Eduardo Valle,et al.  Knowledge transfer for melanoma screening with deep learning , 2017, 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017).

[24]  Michael B. Gotway,et al.  A Systematic Benchmarking Analysis of Transfer Learning for Medical Image Analysis , 2021, DART/FAIR@MICCAI.

[25]  Hongming Shan,et al.  Dual Network Architecture for Few-view CT - Trained on ImageNet Data and Transferred for Medical Imaging , 2019, Developments in X-Ray Tomography XII.

[26]  Jacob Goldberger,et al.  Classification and Detection in Mammograms With Weak Supervision Via Dual Branch Deep Neural Net , 2019, 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019).

[27]  Nan Wu,et al.  Deep Neural Networks Improve Radiologists’ Performance in Breast Cancer Screening , 2019, IEEE Transactions on Medical Imaging.

[28]  Gianni Virgili,et al.  Optical coherence tomography (OCT) for detection of macular oedema in patients with diabetic retinopathy. , 2015, The Cochrane database of systematic reviews.

[29]  Jianming Liang,et al.  CAiD: Context-Aware Instance Discrimination for Self-supervised Learning in Medical Imaging , 2022, MIDL.

[30]  S. Gelly,et al.  An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale , 2020, ICLR.

[31]  Daniel L. Rubin,et al.  Cross-Modal Data Programming Enables Rapid Medical Machine Learning , 2019, Patterns.

[32]  Mohammed A. Fadhel,et al.  Optimizing the Performance of Breast Cancer Classification by Employing the Same Domain Transfer Learning from Hybrid Deep Convolutional Neural Network Model , 2020, Electronics.

[33]  Jian Sun,et al.  Deep Residual Learning for Image Recognition , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

[34]  Geoffrey E. Hinton,et al.  Self-organizing neural network that discovers surfaces in random-dot stereograms , 1992, Nature.

[35]  Yuan Zhang,et al.  FocalMix: Semi-Supervised Learning for 3D Medical Image Detection , 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[36]  A. Ng,et al.  Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists , 2018, PLoS medicine.

[37]  Geoffrey E. Hinton,et al.  A Simple Framework for Contrastive Learning of Visual Representations , 2020, ICML.

[38]  M. Lungren,et al.  Preparing Medical Imaging Data for Machine Learning. , 2020, Radiology.

[39]  Pierre H. Richemond,et al.  Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning , 2020, NeurIPS.

[40]  J. A. Watters,et al.  Screening for Diabetic Retinopathy: The wide-angle retinal camera , 1993, Diabetes Care.

[41]  S. Taylor-Phillips,et al.  Use of artificial intelligence for image analysis in breast cancer screening programmes: systematic review of test accuracy , 2021, BMJ.

[42]  Geoffrey E. Hinton,et al.  Big Self-Supervised Models are Strong Semi-Supervised Learners , 2020, NeurIPS.

[43]  Quoc V. Le,et al.  EfficientDet: Scalable and Efficient Object Detection , 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[44]  Jared Dunnmon,et al.  Multi-task weak supervision enables anatomically-resolved abnormality detection in whole-body FDG-PET/CT , 2021, Nature Communications.

[45]  Behnam Neyshabur,et al.  The Evolution of Out-of-Distribution Robustness Throughout Fine-Tuning , 2021, Trans. Mach. Learn. Res..

[46]  Ioannis Mitliagkas,et al.  Adversarial target-invariant representation learning for domain generalization , 2019, ArXiv.

[47]  B. Schölkopf,et al.  On the Transfer of Disentangled Representations in Realistic Settings , 2020, ICLR.

[48]  Guigang Zhang,et al.  Deep Learning , 2016, Int. J. Semantic Comput.

[49]  Alexei A. Efros,et al.  What makes ImageNet good for transfer learning? , 2016, ArXiv.

[50]  Charles Blundell,et al.  Representation Learning via Invariant Causal Mechanisms , 2020, ICLR.

[51]  Jaime G. Carbonell,et al.  Characterizing and Avoiding Negative Transfer , 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[52]  A. Ng,et al.  MoCo-CXR: MoCo Pretraining Improves Representation and Transferability of Chest X-ray Models , 2020, ArXiv.

[53]  Ellery Wulczyn,et al.  Interpretable survival prediction for colorectal cancer using deep learning , 2020, npj Digital Medicine.

[54]  Behnam Neyshabur,et al.  What is being transferred in transfer learning? , 2020, NeurIPS.

[55]  Subhashini Venugopalan,et al.  Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. , 2016, JAMA.

[56]  Zhitang Chen,et al.  CausalVAE: Disentangled Representation Learning via Neural Structural Causal Models , 2020, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[57]  Mohammed A. Fadhel,et al.  Towards a Better Understanding of Transfer Learning for Medical Imaging: A Case Study , 2020, Applied Sciences.

[58]  Ertunc Erdil,et al.  Contrastive learning of global and local features for medical image segmentation with limited annotations , 2020, NeurIPS.

[59]  Jaime S. Cardoso,et al.  Elastic deformations for data augmentation in breast cancer mass detection , 2018, 2018 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI).

[60]  Andrew Zisserman,et al.  Multi-task Self-Supervised Visual Learning , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).

[61]  R. Hofmann-Wellenhof,et al.  Association Between Surgical Skin Markings in Dermoscopic Images and Diagnostic Performance of a Deep Learning Convolutional Neural Network for Melanoma Recognition. , 2019, JAMA dermatology.

[62]  Yujiu Yang,et al.  Self-supervised Feature Learning for 3D Medical Images by Playing a Rubik's Cube , 2019, MICCAI.

[63]  Mark Chen,et al.  Language Models are Few-Shot Learners , 2020, NeurIPS.

[64]  Michael I. Jordan,et al.  On the Theory of Transfer Learning: The Importance of Task Diversity , 2020, NeurIPS.

[65]  Ilya Sutskever,et al.  Learning Transferable Visual Models From Natural Language Supervision , 2021, ICML.

[66]  Alexander D'Amour,et al.  Underspecification Presents Challenges for Credibility in Modern Machine Learning , 2020, J. Mach. Learn. Res..

[67]  Bernhard Schölkopf,et al.  Domain Generalization via Invariant Feature Representation , 2013, ICML.

[68]  Anne L. Martel,et al.  Self-supervised driven consistency training for annotation efficient histopathology image analysis , 2021, Medical Image Anal..

[69]  Timo Dickscheid,et al.  Improving Cytoarchitectonic Segmentation of Human Brain Areas with Self-supervised Siamese Networks , 2018, MICCAI.

[70]  Alexei A. Efros,et al.  Context Encoders: Feature Learning by Inpainting , 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

[71]  Pietro Perona,et al.  One-shot learning of object categories , 2006, IEEE Transactions on Pattern Analysis and Machine Intelligence.

[72]  Sadegh Mohammadi,et al.  How Transferable Are Self-supervised Features in Medical Image Classification Tasks? , 2021, ML4H@NeurIPS.

[73]  A. Madabhushi,et al.  Artificial intelligence in digital pathology — new tools for diagnosis and precision oncology , 2019, Nature Reviews Clinical Oncology.

[74]  Chen Sun,et al.  Revisiting Unreasonable Effectiveness of Data in Deep Learning Era , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).

[75]  Mark Sellke,et al.  A Universal Law of Robustness via Isoperimetry , 2021, ArXiv.

[76]  L. Oakden-Rayner,et al.  Replication of an open-access deep learning system for screening mammography: Reduced performance mitigated by retraining on local data , 2021, medRxiv.

[77]  Rainer Hofmann-Wellenhof,et al.  A deep learning system for differential diagnosis of skin diseases , 2019, Nature Medicine.

[78]  Yoshua Bengio,et al.  Towards Causal Representation Learning , 2021, ArXiv.

[79]  Donald A. Adjeroh,et al.  Unified Deep Supervised Domain Adaptation and Generalization , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).

[80]  Jong Wook Kim,et al.  Robust fine-tuning of zero-shot models , 2021, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[81]  B. van Ginneken,et al.  Artificial intelligence in radiology: 100 commercially available products and their scientific evidence , 2021, European Radiology.

[82]  Pushmeet Kohli,et al.  Contrastive Training for Improved Out-of-Distribution Detection , 2020, ArXiv.

[83]  Dawn Song,et al.  Pretrained Transformers Improve Out-of-Distribution Robustness , 2020, ACL.

[84]  R Devon Hjelm,et al.  Learning Representations by Maximizing Mutual Information Across Views , 2019, NeurIPS.

[85]  Kai Ma,et al.  Rubik's Cube+: A self-supervised feature learning framework for 3D medical image analysis , 2020, Medical Image Anal..

[86]  Dinggang Shen,et al.  Domain Generalization for Mammography Detection via Multi-style and Multi-view Contrastive Learning , 2021, MICCAI.

[87]  Finale Doshi-Velez,et al.  The myth of generalisability in clinical research and machine learning in health care , 2020, The Lancet Digital Health.

[88]  Anne L. Martel,et al.  Improving Self-supervised Learning with Hardness-aware Dynamic Curriculum Learning: An Application to Digital Pathology , 2021, 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW).

[89]  Kimin Lee,et al.  Using Pre-Training Can Improve Model Robustness and Uncertainty , 2019, ICML.

[90]  Nikos Komodakis,et al.  Unsupervised Representation Learning by Predicting Image Rotations , 2018, ICLR.

[91]  David S. Melnick,et al.  International evaluation of an AI system for breast cancer screening , 2020, Nature.

[92]  Matthieu Cord,et al.  Training data-efficient image transformers & distillation through attention , 2020, ICML.

[93]  Ross B. Girshick,et al.  Momentum Contrast for Unsupervised Visual Representation Learning , 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[94]  Thomas J. Fuchs,et al.  Clinical-grade computational pathology using weakly supervised deep learning on whole slide images , 2019, Nature Medicine.

[95]  Geraint Rees,et al.  Clinically applicable deep learning for diagnosis and referral in retinal disease , 2018, Nature Medicine.

[96]  A. Kadambi Achieving fairness in medical devices , 2021, Science.

[97]  Chen Change Loy,et al.  Domain Generalization: A Survey , 2021, IEEE Transactions on Pattern Analysis and Machine Intelligence.

[98]  Andrew H. Beck,et al.  Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women With Breast Cancer , 2017, JAMA.

[99]  Andrea Vedaldi,et al.  PASS: An ImageNet replacement for self-supervised pretraining without humans , 2021, NeurIPS Datasets and Benchmarks.

[100]  Po-Hsuan Cameron Chen,et al.  Current and future applications of artificial intelligence in pathology: a clinical perspective , 2020, Journal of Clinical Pathology.

[101]  S. Kido,et al.  Anatomy-aware self-supervised learning for anomaly detection in chest radiographs , 2022, iScience.

[102]  Henning Müller,et al.  Visualizing and interpreting feature reuse of pretrained CNNs for histopathology , 2019.

[103]  Joel S Schuman,et al.  Automated detection of clinically significant macular edema by grid scanning optical coherence tomography. , 2006, Ophthalmology.

[104]  Richard S. Sutton,et al.  Generalization in Reinforcement Learning: Successful Examples Using Sparse Coarse Coding , 1996, NIPS.

[105]  Jon Kleinberg,et al.  Transfusion: Understanding Transfer Learning for Medical Imaging , 2019, NeurIPS.

[106]  E. Pierson,et al.  An algorithmic approach to reducing unexplained pain disparities in underserved populations , 2021, Nature Medicine.

[107]  Steven Horng,et al.  MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports , 2019, Scientific Data.

[108]  Thao Nguyen,et al.  Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth , 2020, ICLR.

[109]  Subhashini Venugopalan,et al.  Predicting optical coherence tomography-derived diabetic macular edema grades from fundus photographs using deep learning , 2018, Nature Communications.

[110]  Liang Chen,et al.  Self-supervised learning for medical image analysis using image context restoration , 2019, Medical Image Anal..

[111]  David A. Cohn,et al.  Improving generalization with active learning , 1994, Machine Learning.

[112]  Sebastian Thrun,et al.  Dermatologist-level classification of skin cancer with deep neural networks , 2017, Nature.

[113]  Shih-Fu Chang,et al.  Unsupervised Embedding Learning via Invariant and Spreading Instance Feature , 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[114]  Michael S. Bernstein,et al.  ImageNet Large Scale Visual Recognition Challenge , 2014, International Journal of Computer Vision.

[115]  Mei Wang,et al.  Deep Visual Domain Adaptation: A Survey , 2018, Neurocomputing.

[116]  Pong C. Yuen,et al.  Multi-Adversarial Discriminative Deep Domain Generalization for Face Presentation Attack Detection , 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[117]  Mustafa Suleyman,et al.  Key challenges for delivering clinical impact with artificial intelligence , 2019, BMC Medicine.

[118]  Neil Houlsby,et al.  Supervised Transfer Learning at Scale for Medical Imaging , 2021, ArXiv.

[119]  Christian Etmann,et al.  Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans , 2020, Nature Machine Intelligence.

[120]  Alexei A. Efros,et al.  Unsupervised Visual Representation Learning by Context Prediction , 2015, 2015 IEEE International Conference on Computer Vision (ICCV).

[121]  G. Corrado,et al.  Deep learning to detect optical coherence tomography-derived diabetic macular edema from retinal photographs: a multicenter validation study. , 2022, Ophthalmology. Retina.