Machine learning vulnerability in medical imaging

Abstract Recently, there has been increased interest in applying computer vision methodologies to medical imaging, mainly due to the outstanding performance of deep learning. However, the evolution of computer vision systems has raised security concerns. Adversarial computer vision and machine learning is the field that studies these concerns, producing both adversarial attack proposals and defensive strategies and techniques against them. This chapter captures the current state of threats to medical computer vision and the defensive techniques proposed by researchers. It intends to shed light on the vulnerability of machine learning models in medical image analysis, e.g., disease diagnosis, and to serve as a guide for researchers working in medical image analysis towards the development of more secure machine learning-based computer-aided diagnosis systems.
