Constructing Unrestricted Adversarial Examples with Generative Models
Yang Song | Rui Shu | Nate Kushman | Stefano Ermon
[1] Roland Vollgraf, et al. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017, ArXiv.
[2] Ian S. Fischer, et al. Adversarial Transformation Networks: Learning to Generate Adversarial Examples, 2017, ArXiv.
[3] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[4] J. Nocedal. Updating Quasi-Newton Matrices With Limited Storage, 1980.
[5] Rama Chellappa, et al. Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models, 2018, ICLR.
[6] John C. Duchi, et al. Certifiable Distributional Robustness with Principled Adversarial Training, 2017, ArXiv.
[7] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[8] Alan L. Yuille, et al. Adversarial Examples for Semantic Segmentation and Object Detection, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[9] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[10] Michael D. Buhrmester, et al. Amazon's Mechanical Turk, 2011, Perspectives on Psychological Science: a journal of the Association for Psychological Science.
[11] Jonathon Shlens, et al. Conditional Image Synthesis with Auxiliary Classifier GANs, 2016, ICML.
[12] Atul Prakash, et al. Robust Physical-World Attacks on Deep Learning Visual Classification, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[13] Jason Yosinski, et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Samy Bengio, et al. Adversarial Machine Learning at Scale, 2016, ICLR.
[15] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[16] Dawn Xiaodong Song, et al. Delving into Transferable Adversarial Examples and Black-box Attacks, 2016, ICLR.
[17] Aaron C. Courville, et al. Improved Training of Wasserstein GANs, 2017, NIPS.
[18] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[19] Mingyan Liu, et al. Generating Adversarial Examples with Adversarial Networks, 2018, IJCAI.
[20] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[21] Lawrence D. Jackel, et al. Backpropagation Applied to Handwritten Zip Code Recognition, 1989, Neural Computation.
[22] Stefano Ermon, et al. A DIRT-T Approach to Unsupervised Domain Adaptation, 2018, ICLR.
[23] Yang Song, et al. PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples, 2017, ICLR.
[24] Xiaogang Wang, et al. Deep Learning Face Attributes in the Wild, 2014, 2015 IEEE International Conference on Computer Vision (ICCV).
[25] Luca Rigazio, et al. Towards Deep Neural Network Architectures Robust to Adversarial Examples, 2014, ICLR.
[26] J. Zico Kolter, et al. Provable defenses against adversarial examples via the convex outer adversarial polytope, 2017, ICML.
[27] Geoffrey E. Hinton, et al. Visualizing Data using t-SNE, 2008.
[28] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function, 2000.
[29] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[30] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[31] Pascal Frossard, et al. Measuring the effect of nuisance variables on classifiers, 2016, BMVC.
[32] Micah Sherr, et al. Hidden Voice Commands, 2016, USENIX Security Symposium.
[33] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[34] Stefano Ermon, et al. Flow-GAN: Combining Maximum Likelihood and Adversarial Learning in Generative Models, 2017, AAAI.
[35] Vladimir N. Vapnik. The Nature of Statistical Learning Theory, 2000, Statistics for Engineering and Information Science.
[36] Radha Poovendran, et al. Semantic Adversarial Examples, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[37] T. Tao. Topics in Random Matrix Theory, 2012.
[38] Sameer Singh, et al. Generating Natural Adversarial Examples, 2017, ICLR.
[39] Moustapha Cissé, et al. Houdini: Fooling Deep Structured Prediction Models, 2017, ArXiv.
[40] Moustapha Cissé, et al. Parseval Networks: Improving Robustness to Adversarial Examples, 2017, ICML.
[41] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[42] Stefano Ermon, et al. Multi-Agent Generative Adversarial Imitation Learning, 2018, NeurIPS.
[43] Lujo Bauer, et al. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, 2016, CCS.
[44] Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images, 2009.
[45] Aditi Raghunathan, et al. Certified Defenses against Adversarial Examples, 2018, ICLR.
[46] Nicholas Carlini, et al. Unrestricted Adversarial Examples, 2018, ArXiv.
[47] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[48] Jungwoo Lee, et al. Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN, 2017, ArXiv.
[49] Wenyuan Xu, et al. DolphinAttack: Inaudible Voice Commands, 2017, CCS.
[50] Hyrum S. Anderson, et al. DeepDGA: Adversarially-Tuned Domain Generation and Detection, 2016, AISec@CCS.