Increasing-Margin Adversarial (IMA) Training to Improve Adversarial Robustness of Neural Networks

Convolutional neural networks (CNNs) have surpassed traditional methods for medical image classification. However, CNNs are vulnerable to adversarial attacks, which may lead to disastrous consequences in medical applications. Although adversarial noises are usually generated by attack algorithms, white-noise-induced adversarial samples can also exist, so the threat is real. In this study, we propose a novel training method, named IMA, to improve the robustness of CNNs against adversarial noises. During training, the IMA method increases the margins of training samples in the input space, i.e., it moves the CNN decision boundaries far away from the training samples to improve robustness. The IMA method is evaluated on four publicly available datasets under strong 100-iteration PGD white-box adversarial attacks, and the results show that the proposed method significantly improves CNN classification accuracy on noisy data while keeping relatively high accuracy on clean data. We hope our approach may facilitate the development of robust applications in the medical field.
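Since the abstract does not spell out the exact margin-estimation and expansion rules, the following is a minimal PyTorch sketch of the general idea only: each training sample keeps its own L-inf perturbation bound, PGD attacks it at that bound, and the bound grows only while the model still classifies the attacked sample correctly, which pushes the decision boundary outward over training. The names (`pgd_attack`, `ima_train_step`) and all hyperparameters (`eps_step`, `eps_max`, the PGD settings, the margin-update rule) are illustrative assumptions, not the authors' algorithm.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha, steps=20):
    """L-inf PGD within a (possibly per-sample) eps-ball around x.
    `eps` and `alpha` broadcast over the batch, e.g. shape (B, 1, 1, 1)."""
    x_adv = x + eps * torch.empty_like(x).uniform_(-1.0, 1.0)
    x_adv = x_adv.clamp(0.0, 1.0).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()

def ima_train_step(model, optimizer, x, y, margins, idx,
                   eps_step=1 / 255, eps_max=8 / 255):
    """One illustrative IMA-style step (hypothetical update rule): attack
    each sample at its current estimated margin, train on clean and
    adversarial examples, then adjust the per-sample margin.
    `margins` is a persistent 1-D tensor indexed by dataset index `idx`."""
    eps = margins[idx].view(-1, 1, 1, 1)
    x_adv = pgd_attack(model, x, y, eps=eps, alpha=eps / 4, steps=20)

    model.train()
    optimizer.zero_grad()
    logits_clean = model(x)
    logits_adv = model(x_adv)
    loss = 0.5 * (F.cross_entropy(logits_clean, y)
                  + F.cross_entropy(logits_adv, y))
    loss.backward()
    optimizer.step()

    # Margin update (assumed, not from the paper): if the adversarial
    # example at the current margin is still classified correctly, the
    # true margin is larger, so expand it; otherwise shrink it back.
    with torch.no_grad():
        correct = logits_adv.argmax(1).eq(y)
        m = margins[idx]
        m = torch.where(correct,
                        (m + eps_step).clamp(max=eps_max),
                        (m - eps_step).clamp(min=eps_step))
        margins[idx] = m
    return loss.item()
```

In this sketch, `margins` would be initialized once per dataset, e.g. `margins = torch.full((len(dataset),), 1 / 255)`, and `idx` would come from a DataLoader that yields sample indices alongside `(x, y)`, so each sample's margin persists across epochs.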
