Luyu Wang | Ruitong Huang | Gavin Weiguang Ding | Xiaomeng Jin | Kry Yik-Chau Lui
[1] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[2] Joan Bruna, et al. Intriguing Properties of Neural Networks, 2013, ICLR.
[3] Xavier Gastaldi, et al. Shake-Shake Regularization, 2017, arXiv.
[4] K. Ball. An Elementary Introduction to Modern Convex Geometry, 1997.
[5] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[6] Aleksander Madry, et al. Adversarially Robust Generalization Requires More Data, 2018, NeurIPS.
[7] Dale Schuurmans, et al. Learning with a Strong Adversary, 2015, arXiv.
[8] Aditi Raghunathan, et al. Certified Defenses against Adversarial Examples, 2018, ICLR.
[9] J. Zico Kolter, et al. Provable Defenses against Adversarial Examples via the Convex Outer Adversarial Polytope, 2017, ICML.
[10] Yoshua Bengio, et al. A3T: Adversarially Augmented Adversarial Training, 2018, arXiv.
[11] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[12] Logan Engstrom, et al. Synthesizing Robust Adversarial Examples, 2017, ICML.
[13] Richard Szeliski, et al. Computer Vision: Algorithms and Applications, 2011, Texts in Computer Science.
[14] Martin Wattenberg, et al. Adversarial Spheres, 2018, ICLR.
[15] David Warde-Farley, et al. Adversarial Perturbations of Deep Neural Networks, 2016.
[16] Roland Vollgraf, et al. Fashion-MNIST: A Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017, arXiv.
[17] John F. Canny, et al. A Computational Approach to Edge Detection, 1986, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[18] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2016, CVPR.
[19] K. Ball. An Elementary Introduction to Modern Convex Geometry, in Flavors of Geometry, 1997.
[20] Samy Bengio, et al. Adversarial Machine Learning at Scale, 2016, ICLR.
[21] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2017, IEEE Symposium on Security and Privacy (SP).
[22] John C. Duchi, et al. Certifying Some Distributional Robustness with Principled Adversarial Training, 2017, ICLR.
[23] Luyu Wang, et al. advertorch v0.1: An Adversarial Robustness Toolbox Based on PyTorch, 2019, arXiv.
[24] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[25] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[26] Pascal Frossard, et al. Analysis of Classifiers' Robustness to Adversarial Perturbations, 2015, Machine Learning.
[27] Fabio Roli, et al. Evasion Attacks against Machine Learning at Test Time, 2013, ECML/PKDD.
[28] Aleksander Madry, et al. Robustness May Be at Odds with Accuracy, 2018, ICLR.
[29] Shie Mannor, et al. Robustness and Regularization of Support Vector Machines, 2008, Journal of Machine Learning Research.
[30] David A. Wagner, et al. Defensive Distillation Is Not Robust to Adversarial Examples, 2016, arXiv.
[31] M. Ledoux. The Concentration of Measure Phenomenon, 2001.
[32] John C. Duchi, et al. Certifiable Distributional Robustness with Principled Adversarial Training, 2017, arXiv.