In this paper, we study how the success rate of adversarial attacks on deep neural networks depends on the biomedical image type, the attack control parameters, and the image dataset size. With this work, we aim to contribute to the accumulation of experimental results on adversarial attacks for the community working with biomedical images. White-box Projected Gradient Descent (PGD) attacks were examined on 8 classification tasks and 13 image datasets containing a total of 605,080 chest X-ray images and 317,000 histology images of malignant tumors. We conclude that: (1) increasing the perturbation amplitude used to generate malicious adversarial images raises the fraction of successful attacks for the majority of image types examined in this study; (2) histology images tend to be less sensitive to growth in the amplitude of adversarial perturbations; (3) the percentage of successful attacks grows with the number of iterations of the perturbation-generating algorithm, with an asymptotic stabilization; (4) the success rate of attacks drops dramatically when the original confidence of the predicted image class exceeds 0.95; and (5) the expected dependence of the percentage of successful attacks on the size of the training set was not confirmed.
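The white-box PGD attack referenced above iteratively perturbs an input in the direction of the sign of the loss gradient, projecting back onto an L-infinity ball of radius epsilon after each step. The following is a minimal numpy sketch of that iteration, not the paper's actual implementation; the `grad_fn` callback, step size `alpha`, and the [0, 1] pixel range are assumptions for illustration.

```python
import numpy as np

def pgd_attack(grad_fn, x, epsilon, alpha, n_iter):
    """Projected Gradient Descent attack sketch.

    grad_fn : callable returning the gradient of the loss w.r.t. the input
    x       : original image as a numpy array, pixel values in [0, 1]
    epsilon : amplitude of the perturbation (L-infinity ball radius)
    alpha   : step size per iteration
    n_iter  : number of iterations of the perturbation-generating loop
    """
    x_adv = x.copy()
    for _ in range(n_iter):
        # step in the direction that increases the loss
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        # project back onto the epsilon-ball around the original image
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)
        # keep pixel values in the valid image range
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv

# Toy usage with a fixed (hypothetical) gradient direction
w = np.array([1.0, -1.0, 0.5])
x = np.array([0.5, 0.5, 0.5])
adv = pgd_attack(lambda z: w, x, epsilon=0.1, alpha=0.05, n_iter=10)
```

The two parameters studied in findings (1) and (3), the perturbation amplitude and the number of iterations, correspond to `epsilon` and `n_iter` here: a larger `epsilon` allows a stronger perturbation, while extra iterations beyond the point where the perturbation reaches the ball boundary change the result less and less, consistent with the asymptotic stabilization noted above.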