Anatomical context protects deep learning from adversarial perturbations in medical imaging

Deep learning has achieved impressive performance across a variety of tasks, including medical image processing. However, recent research has shown that deep neural networks are susceptible to small, adversarially crafted perturbations of the input image. We study the impact of such adversarial perturbations on a medical imaging task: predicting an individual's age from a 3D brain MRI. We consider two models: a conventional deep neural network, and a hybrid deep learning model that additionally uses features informed by anatomical context. We find that imperceptible noise added to an image can introduce significant errors in the predicted age, that a single perturbation can compromise even large batches of images, and that the hybrid model is substantially more robust to adversarial perturbations than the conventional deep neural network. Our work highlights limitations of current deep learning techniques in clinical applications, and suggests a path forward.
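For illustration only, the sketch below shows how an imperceptible perturbation of the kind described above can be crafted with a gradient-sign (FGSM-style) step against a brain-age regressor. The model, loss, and step size are hypothetical placeholders and are not the architecture or attack settings used in this paper.

```python
# Illustrative sketch (not the paper's implementation): an FGSM-style
# perturbation applied to a hypothetical 3D brain-age regression model.
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, image, true_age, epsilon=0.01):
    """Craft a small additive perturbation that increases the age-prediction
    error of `model` on a single 3D MRI volume (names are hypothetical)."""
    image = image.clone().detach().requires_grad_(True)
    predicted_age = model(image)                 # e.g. shape (1, 1)
    loss = F.mse_loss(predicted_age, true_age)   # regression loss
    loss.backward()
    # One signed-gradient step, bounded by epsilon so the change in image
    # intensities remains visually imperceptible.
    perturbation = epsilon * image.grad.sign()
    return (image + perturbation).detach()

# Hypothetical usage: `regressor` maps a (1, 1, D, H, W) volume to an age.
# adversarial_mri = fgsm_perturbation(regressor, mri_volume,
#                                     torch.tensor([[62.0]]))
```

A batch-level ("universal") variant of the same idea would accumulate the signed gradient over many images before applying one shared perturbation to all of them, which is the setting the abstract refers to when a single perturbation affects large batches.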
