Adversarial Attack Vulnerability of Medical Image Analysis Systems: Unexplored Factors
Gerda Bortsova | Cristina González-Gonzalo | Suzanne C. Wetstein | Florian Dubost | Ioannis Katramados | Laurens Hogeweg | Bart Liefers | Bram van Ginneken | Josien P.W. Pluim | Mitko Veta | Clara I. Sánchez | Marleen de Bruijne
[1] Pan He, et al. Adversarial Examples: Attacks and Defenses for Deep Learning, 2017, IEEE Transactions on Neural Networks and Learning Systems.
[2] Yarin Gal, et al. Dropout Inference in Bayesian Neural Networks with Alpha-divergences, 2017, ICML.
[3] Ronald M. Summers, et al. ChestX-ray: Hospital-Scale Chest X-ray Database and Benchmarks on Weakly Supervised Classification and Localization of Common Thorax Diseases, 2019, Deep Learning and Convolutional Neural Networks for Medical Imaging and Clinical Informatics.
[4] Dorin Comaniciu, et al. Learning to Recognize Abnormalities in Chest X-Rays with Location-Aware Dense Networks, 2018, CIARP.
[5] Dan Boneh, et al. Ensemble Adversarial Training: Attacks and Defenses, 2017, ICLR.
[6] Seyed-Mohsen Moosavi-Dezfooli, et al. Universal Adversarial Perturbations, 2017, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[7] Hayit Greenspan, et al. An Adversarial Learning Approach to Medical Image Synthesis for Lesion Detection, 2018, IEEE Journal of Biomedical and Health Informatics.
[8] M. Abràmoff, et al. Improved Automated Detection of Diabetic Retinopathy on a Publicly Available Dataset Through Integration of Deep Learning, 2016, Investigative Ophthalmology & Visual Science.
[9] Aleksander Madry, et al. Robustness May Be at Odds with Accuracy, 2018, ICLR.
[10] Mark Button, et al. The Financial Cost of Healthcare Fraud 2015: What Data from Around the World Shows, 2015.
[11] James Bailey, et al. Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems, 2019, Pattern Recognition.
[12] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[13] Max Welling, et al. Rotation Equivariant CNNs for Digital Pathology, 2018, MICCAI.
[14] Bram van Ginneken, et al. A Survey on Deep Learning in Medical Image Analysis, 2017, Medical Image Analysis.
[15] James Bailey, et al. Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets, 2020, ICLR.
[16] Marco Eichelberg, et al. Cybersecurity in PACS and Medical Imaging: An Overview, 2020, Journal of Digital Imaging.
[17] Xiangyu Zhang, et al. Attacks Meet Interpretability: Attribute-Steered Detection of Adversarial Samples, 2018, NeurIPS.
[18] Andrew Y. Ng, et al. CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning, 2017, arXiv.
[19] Ara Darzi, et al. The Challenges of Cybersecurity in Health Care: The UK National Health Service as a Case Study, 2019, The Lancet Digital Health.
[20] B. van Ginneken, et al. Automated Deep-Learning System for Gleason Grading of Prostate Cancer Using Biopsies: A Diagnostic Study, 2020, The Lancet Oncology.
[21] D. Chandler. Seven Challenges in Image Quality Assessment: Past, Present, and Future Research, 2013.
[22] Wesley De Neve, et al. Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation, 2019, MICCAI.
[23] Ghassan Hamarneh, et al. Vulnerability Analysis of Chest X-Ray Image Classification Against Adversarial Attacks, 2018, MLCN/DLF/iMIMIC@MICCAI.
[24] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[25] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[26] Kilian Q. Weinberger, et al. Densely Connected Convolutional Networks, 2017, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[27] Fabio Roli, et al. Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning, 2018, CCS.
[28] Subhashini Venugopalan, et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs, 2016, JAMA.
[29] B. van Ginneken, et al. Computer Aided Detection of Tuberculosis on Chest Radiographs: An Evaluation of the CAD4TB v6 System, 2019, Scientific Reports.
[30] Yarin Gal, et al. Understanding Measures of Uncertainty for Adversarial Example Detection, 2018, UAI.
[31] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[32] Allison M. Onken, et al. Deep Learning Assessment of Breast Terminal Duct Lobular Unit Involution: Towards Automated Prediction of Breast Cancer Risk, 2019, PLoS ONE.
[33] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2016, IEEE Symposium on Security and Privacy (SP).
[34] Ender Konukoglu, et al. Injecting and Removing Suspicious Features in Breast Imaging with CycleGAN: A Pilot Study of Automated Adversarial Attacks Using Neural Networks on Small Images, 2019, European Journal of Radiology.
[35] Michael I. Jordan, et al. Theoretically Principled Trade-off between Robustness and Accuracy, 2019, ICML.
[36] Ting Wang, et al. Interpretable Deep Learning under Fire, 2018, USENIX Security Symposium.
[37] M. Lenaz. Health-Care Fraud and Abuse, 2009, Connecticut Medicine.
[38] Patrick D. McDaniel, et al. Transferability in Machine Learning: From Phenomena to Black-Box Attacks Using Adversarial Samples, 2016, arXiv.
[39] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[40] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2016, IEEE European Symposium on Security and Privacy (EuroS&P).
[41] Jinfeng Yi, et al. Is Robustness the Cost of Accuracy? A Comprehensive Study on the Robustness of 18 Deep Image Classification Models, 2018, ECCV.
[42] M. Abràmoff, et al. Pivotal Trial of an Autonomous AI-Based Diagnostic System for Detection of Diabetic Retinopathy in Primary Care Offices, 2018, npj Digital Medicine.
[43] David A. Forsyth, et al. SafetyNet: Detecting and Rejecting Adversarial Examples Robustly, 2017, IEEE International Conference on Computer Vision (ICCV).
[44] Andrew L. Beam, et al. Adversarial Attacks Against Medical Deep Learning Systems, 2018, arXiv.
[45] William J. Rudman, et al. Healthcare Fraud and Abuse, 2009, Perspectives in Health Information Management.
[46] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[47] Dawn Xiaodong Song, et al. Delving into Transferable Adversarial Examples and Black-Box Attacks, 2016, ICLR.
[48] Ender Konukoglu, et al. Visual Feature Attribution Using Wasserstein GANs, 2018, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[49] Kouichi Sakurai, et al. One Pixel Attack for Fooling Deep Neural Networks, 2017, IEEE Transactions on Evolutionary Computation.
[50] Kimin Lee, et al. Using Pre-Training Can Improve Model Robustness and Uncertainty, 2019, ICML.
[51] Andrew L. Beam, et al. Adversarial Attacks on Medical Machine Learning, 2019, Science.
[52] Ajmal Mian, et al. Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey, 2018, IEEE Access.
[53] Samy Bengio, et al. Adversarial Examples in the Physical World, 2016, ICLR.
[54] E. Finkelstein, et al. Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images From Multiethnic Populations With Diabetes, 2017, JAMA.
[55] Andrew H. Beck, et al. Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women With Breast Cancer, 2017, JAMA.
[56] Aleksander Madry, et al. On Evaluating Adversarial Robustness, 2019, arXiv.
[57] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[58] Oleg S. Pianykh, et al. How Secure Is Your Radiology Department? Mapping Digital Radiology Adoption and Security Worldwide, 2016, AJR American Journal of Roentgenology.
[59] Nicholas Carlini, et al. Is AmI (Attacks Meet Interpretability) Robust to Adversarial Examples?, 2019, arXiv.
[60] Yara T. E. Lechanteur, et al. Evaluation of a Deep Learning System for the Joint Automated Detection of Diabetic Retinopathy and Age-Related Macular Degeneration, 2019, Acta Ophthalmologica.
[61] B. van Ginneken, et al. Computer Aided Detection of Tuberculosis on Chest Radiographs: An Evaluation of the CAD4TB v6 System, 2019, Scientific Reports.
[62] Ara Darzi, et al. Cybersecurity and Healthcare: How Safe Are We?, 2017, British Medical Journal.
[63] Joan Bruna, et al. Intriguing Properties of Neural Networks, 2013, ICLR.
[64] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2017, IEEE Symposium on Security and Privacy (SP).
[65] Yang Song, et al. PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples, 2017, ICLR.
[66] Eero P. Simoncelli, et al. Image Quality Assessment: From Error Visibility to Structural Similarity, 2004, IEEE Transactions on Image Processing.
[67] S. Tsaftaris, et al. Pseudo-Healthy Synthesis with Pathology Disentanglement and Adversarial Learning, 2020, Medical Image Analysis.
[68] Pushmeet Kohli, et al. Adversarial Risk and the Dangers of Evaluating Against Weak Attacks, 2018, ICML.
[69] Nassir Navab, et al. Generalizability vs. Robustness: Adversarial Examples for Medical Imaging, 2018, arXiv.