Physical Attacks in Dermoscopy: An Evaluation of Robustness for Clinical Deep Learning
Anirban Mukhopadhyay | Arjan Kuijper | David Kügler | Johannes Fauser | Salome Kazeminia | Thomas Vogl | Markus Meissner | Alexander Distergoft | Marc Uecker | Andreas M. Bucher | Johannes Kleemann | Ali Jabhe | Daniel Alte | Angeelina Rajkarnikar | Tobias Weberschock
[1] R. Kaplan, et al. Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time, 2015, PLoS ONE.
[2] M. Jalali, et al. Cybersecurity in Hospitals: A Systematic, Organizational Perspective, 2018, Journal of Medical Internet Research.
[3] Dawn Song, et al. Robust Physical-World Attacks on Deep Learning Models, 2017, arXiv:1707.08945.
[4] Alain Pitiot, et al. Fusing fine-tuned deep features for skin lesion classification, 2019, Computerized Medical Imaging and Graphics.
[5] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[6] Yao Zhao, et al. Adversarial Attacks and Defences Competition, 2018, arXiv.
[7] B. Lowell, et al. Dermatology in primary care: Prevalence and patient disposition, 2001, Journal of the American Academy of Dermatology.
[8] P. E. Kalb, et al. Health care fraud and abuse, 1999, JAMA.
[9] Lujo Bauer, et al. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, 2016, CCS.
[10] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[11] François Chollet, et al. Xception: Deep Learning with Depthwise Separable Convolutions, 2017, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[12] Sergey Ioffe, et al. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning, 2016, AAAI.
[13] Logan Engstrom, et al. Synthesizing Robust Adversarial Examples, 2017, ICML.
[14] David A. Forsyth, et al. NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles, 2017, arXiv.
[15] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[16] Sebastian Thrun, et al. Dermatologist-level classification of skin cancer with deep neural networks, 2017, Nature.
[17] Nassir Navab, et al. Generalizability vs. Robustness: Adversarial Examples for Medical Imaging, 2018, MICCAI.
[18] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[19] Ghassan Hamarneh, et al. Vulnerability Analysis of Chest X-Ray Image Classification Against Adversarial Attacks, 2018, MLCN/DLF/iMIMIC@MICCAI.
[20] Yarin Gal, et al. Understanding Measures of Uncertainty for Adversarial Example Detection, 2018, UAI.
[21] Noel C. F. Codella, et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), hosted by the International Skin Imaging Collaboration (ISIC), 2018, IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018).
[22] Harald Kittler, et al. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions, 2018, Scientific Data.
[23] Bo Chen, et al. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications, 2017, arXiv.
[24] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[25] Andrew L. Beam, et al. Adversarial Attacks Against Medical Deep Learning Systems, 2018, arXiv.