Adversarial explanations for understanding image classification decisions and improved neural network robustness
[1] Bolei Zhou, et al. Network Dissection: Quantifying Interpretability of Deep Visual Representations, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[2] J. Zico Kolter, et al. Scaling provable adversarial defenses, 2018, NeurIPS.
[3] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2017 IEEE Symposium on Security and Privacy (SP).
[4] Andrew L. Beam, et al. Adversarial attacks on medical machine learning, 2019, Science.
[5] Moustapha Cissé, et al. Parseval Networks: Improving Robustness to Adversarial Examples, 2017, ICML.
[6] K. Doi, et al. Development of a digital image database for chest radiographs with and without a lung nodule: receiver operating characteristic analysis of radiologists' detection of pulmonary nodules, 2000, AJR: American Journal of Roentgenology.
[7] Abhishek Das, et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, 2017 IEEE International Conference on Computer Vision (ICCV).
[8] Lukasz Kaiser, et al. Attention Is All You Need, 2017, NIPS.
[9] Raquel Urtasun, et al. Understanding the Effective Receptive Field in Deep Convolutional Neural Networks, 2016, NIPS.
[10] Takuya Akiba, et al. ShakeDrop Regularization for Deep Residual Learning, 2018, IEEE Access.
[11] Aleksander Madry, et al. On Evaluating Adversarial Robustness, 2019, arXiv.
[12] Yao Zhao, et al. Adversarial Attacks and Defences Competition, 2018, arXiv.
[13] Jinfeng Yi, et al. Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach, 2018, ICLR.
[14] Dylan Hadfield-Menell, et al. On the Geometry of Adversarial Examples, 2018, arXiv.
[15] Philip H. S. Torr, et al. Learn To Pay Attention, 2018, ICLR.
[16] Patrick D. McDaniel, et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, 2016, arXiv.
[17] Kilian Q. Weinberger, et al. Deep Networks with Stochastic Depth, 2016, ECCV.
[18] Rob Fergus, et al. Visualizing and Understanding Convolutional Networks, 2013, ECCV.
[19] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[20] Luiz Eduardo Soares de Oliveira, et al. Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[21] Chandan Singh, et al. Definitions, methods, and applications in interpretable machine learning, 2019, Proceedings of the National Academy of Sciences.
[22] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[23] David Duvenaud, et al. Invertible Residual Networks, 2018, ICML.
[24] Bolei Zhou, et al. Visualizing and Understanding Generative Adversarial Networks (Extended Abstract), 2019, arXiv.
[25] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[26] Logan Engstrom, et al. Synthesizing Robust Adversarial Examples, 2017, ICML.
[27] Dan Boneh, et al. Ensemble Adversarial Training: Attacks and Defenses, 2017, ICLR.
[28] Bolei Zhou, et al. GAN Dissection: Visualizing and Understanding Generative Adversarial Networks, 2018, ICLR.
[29] Jack Stilgoe, et al. Machine learning, social learning and the governance of self-driving cars, 2017, Social Studies of Science.
[30] Xiaolin Hu, et al. Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[31] Zhi Zhang, et al. Bag of Tricks for Image Classification with Convolutional Neural Networks, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[32] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[33] Debdeep Mukhopadhyay, et al. Adversarial Attacks and Defences: A Survey, 2018, arXiv.
[34] Ting Liu, et al. Attention-over-Attention Neural Networks for Reading Comprehension, 2016, ACL.
[35] Will Landecker, et al. Interpretable machine learning and sparse coding for computer vision, 2014.
[36] Junfeng Yang, et al. DeepXplore: Automated Whitebox Testing of Deep Learning Systems, 2017, SOSP.
[37] David J. Field, et al. Emergence of simple-cell receptive field properties by learning a sparse code for natural images, 1996, Nature.
[38] Andrew Zisserman, et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, 2013, ICLR.
[39] Jinfeng Yi, et al. Is Robustness the Cost of Accuracy? A Comprehensive Study on the Robustness of 18 Deep Image Classification Models, 2018, ECCV.
[40] Bernt Schiele, et al. Disentangling Adversarial Robustness and Generalization, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[41] Luca Antiga, et al. Automatic differentiation in PyTorch, 2017.
[42] Seunghoon Hong, et al. Online Tracking by Learning Discriminative Saliency Map with Convolutional Neural Network, 2015, ICML.
[43] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[44] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[45] Yun Fu, et al. Tell Me Where to Look: Guided Attention Inference Network, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[46] J. Zico Kolter, et al. Certified Adversarial Robustness via Randomized Smoothing, 2019, ICML.
[47] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, arXiv.
[48] Jian Sun, et al. Identity Mappings in Deep Residual Networks, 2016, ECCV.
[49] Emily Chia-Yu Su, et al. Predicting diabetic retinopathy and identifying interpretable biomedical features using machine learning algorithms, 2018, BMC Bioinformatics.
[50] Aleksander Madry, et al. Robustness May Be at Odds with Accuracy, 2018, ICLR.
[51] Abubakar Abid, et al. Interpretation of Neural Networks is Fragile, 2017, AAAI.
[52] Pietro Perona, et al. Microsoft COCO: Common Objects in Context, 2014, ECCV.
[53] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.