Wei-An Lin | Chun Pong Lau | Alexander Levine | Rama Chellappa | Soheil Feizi
[1] Aleksander Madry, et al. Robustness May Be at Odds with Accuracy, 2018, ICLR.
[2] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[3] Rama Chellappa, et al. Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models, 2018, ICLR.
[4] Yanjun Qi, et al. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks, 2017, NDSS.
[5] Alexei A. Efros, et al. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric, 2018, CVPR.
[6] Hamza Fawzi, et al. Adversarial vulnerability for any classifier, 2018, NeurIPS.
[7] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[8] James Bailey, et al. Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality, 2018, ICLR.
[9] Thomas Hofmann, et al. The Odds are Odd: A Statistical Test for Detecting Adversarial Examples, 2019, ICML.
[10] Jaakko Lehtinen, et al. Analyzing and Improving the Image Quality of StyleGAN, 2020, CVPR.
[11] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[12] Hao Chen, et al. MagNet: A Two-Pronged Defense against Adversarial Examples, 2017, CCS.
[13] J. Zico Kolter, et al. Fast is better than free: Revisiting adversarial training, 2020, ICLR.
[14] Tom Goldstein, et al. Certified Defenses for Adversarial Patches, 2020, ICLR.
[15] Yang Song, et al. Constructing Unrestricted Adversarial Examples with Generative Models, 2018, NeurIPS.
[16] Jun Zhu, et al. Boosting Adversarial Attacks with Momentum, 2018, CVPR.
[17] Aleksander Madry, et al. Adversarial Examples Are Not Bugs, They Are Features, 2019, NeurIPS.
[18] Peilin Zhong, et al. Resisting Adversarial Attacks by k-Winners-Take-All, 2019, arXiv.
[19] Alexandros G. Dimakis, et al. The Robust Manifold Defense: Adversarial Training using Generative Models, 2017, arXiv.
[20] Michael I. Jordan, et al. Theoretically Principled Trade-off between Robustness and Accuracy, 2019, ICML.
[21] Kun Xu, et al. Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks, 2020, ICLR.
[22] Roland Vollgraf, et al. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017, arXiv.
[23] Bernt Schiele, et al. Disentangling Adversarial Robustness and Generalization, 2019, CVPR.
[24] Aditi Raghunathan, et al. Semidefinite relaxations for certifying robustness to adversarial examples, 2018, NeurIPS.
[25] J. Zico Kolter, et al. Provable defenses against adversarial examples via the convex outer adversarial polytope, 2017, ICML.
[26] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[27] Peter Wonka, et al. Image2StyleGAN: How to Embed Images Into the StyleGAN Latent Space?, 2019, ICCV.
[28] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[29] J. Zico Kolter, et al. Certified Adversarial Robustness via Randomized Smoothing, 2019, ICML.
[30] Ross B. Girshick. Fast R-CNN, 2015, ICCV.
[31] Jeff Donahue, et al. Large Scale GAN Training for High Fidelity Natural Image Synthesis, 2018, ICLR.
[32] Alexander Levine, et al. Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation, 2019, AAAI.
[33] Xiaogang Wang, et al. Deep Learning Face Attributes in the Wild, 2015, ICCV.
[34] Ser-Nam Lim, et al. Fine-grained Synthesis of Unrestricted Adversarial Examples, 2019, arXiv.
[35] Logan Engstrom, et al. Synthesizing Robust Adversarial Examples, 2017, ICML.
[36] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[37] Yi Sun, et al. Testing Robustness Against Unforeseen Adversaries, 2019, arXiv.
[38] Tara N. Sainath, et al. Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups, 2012, IEEE Signal Processing Magazine.
[39] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2017, IEEE S&P.
[40] Gregory Cohen, et al. EMNIST: an extension of MNIST to handwritten letters, 2017, IJCNN.
[41] Ali Razavi, et al. Generating Diverse High-Fidelity Images with VQ-VAE-2, 2019, NeurIPS.
[42] Soheil Feizi, et al. Functional Adversarial Attacks, 2019, NeurIPS.
[43] Soheil Feizi, et al. Wasserstein Smoothing: Certified Robustness against Wasserstein Adversarial Attacks, 2019, AISTATS.
[44] Timo Aila, et al. A Style-Based Generator Architecture for Generative Adversarial Networks, 2019, CVPR.