Improving Robustness of Medical Image Diagnosis with Denoising Convolutional Neural Networks

Convolutional neural networks (CNNs) are vulnerable to adversarial noise, which can have disastrous consequences in safety- or security-critical systems. This paper proposes a novel mechanism that improves the robustness of medical image classification systems by giving CNN classifiers a denoising capability, realized through a naturally embedded auto-encoder and high-level feature invariance to general noise. The denoising mechanism can be adapted to many model architectures, and can therefore be readily combined with existing models and denoising mechanisms to further improve the robustness of CNN classifiers. The effectiveness of the proposed method is confirmed by comprehensive evaluations on two medical image classification tasks.
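
To make the described architecture concrete, the following is a minimal PyTorch sketch of the general idea: the classifier's early convolutional layers double as the encoder of a denoising auto-encoder, a decoder branch reconstructs the clean image, and a feature-invariance penalty pulls the high-level features of a noisy input toward those of its clean counterpart. All module names, layer sizes, noise levels, and loss weights here are illustrative assumptions, not the authors' exact design.

    # Minimal sketch (assumed design, not the paper's exact architecture):
    # a CNN classifier with an embedded denoising auto-encoder and a
    # high-level feature-invariance loss.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DenoisingClassifier(nn.Module):
        def __init__(self, num_classes: int = 2):
            super().__init__()
            # Encoder: shared between reconstruction and classification.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Decoder: reconstructs a clean image from the shared features,
            # giving the network its embedded denoising auto-encoder.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )
            # Classification head on top of the shared high-level features.
            self.head = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)
            )

        def forward(self, x):
            z = self.encoder(x)  # high-level features
            return self.head(z), self.decoder(z), z

    def training_loss(model, x_clean, y, lam_rec=1.0, lam_inv=0.1):
        """Combined loss: classification + denoising + feature invariance.

        A Gaussian-perturbed copy of each image stands in for "general
        noise"; the invariance term ties its high-level features to the
        clean features. Weights lam_rec and lam_inv are assumed values.
        """
        x_noisy = (x_clean + 0.1 * torch.randn_like(x_clean)).clamp(0, 1)
        logits, recon, z_noisy = model(x_noisy)
        with torch.no_grad():
            _, _, z_clean = model(x_clean)  # target features, not back-propagated
        loss_cls = F.cross_entropy(logits, y)    # classify the noisy input
        loss_rec = F.mse_loss(recon, x_clean)    # denoise: reconstruct the clean image
        loss_inv = F.mse_loss(z_noisy, z_clean)  # high-level feature invariance
        return loss_cls + lam_rec * loss_rec + lam_inv * loss_inv

    if __name__ == "__main__":
        model = DenoisingClassifier(num_classes=2)
        x = torch.rand(4, 3, 64, 64)
        y = torch.randint(0, 2, (4,))
        loss = training_loss(model, x, y)
        loss.backward()
        print(f"combined loss: {loss.item():.4f}")

Note that the clean features are detached from the graph, so the invariance term moves the noisy-input features toward the clean ones rather than collapsing both; because the denoiser and invariance penalty attach only to the shared encoder, the same scheme can be grafted onto other backbones, which is what makes the mechanism composable with existing models.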
