Bernhard Schölkopf | David Lopez-Paz | Léon Bottou | Yann Ollivier | Carl-Johann Simon-Gabriel
[1] David Mumford, et al. Statistics of natural images and models, 1999, Proceedings of the 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No. PR00149).
[2] Moustapha Cissé, et al. Parseval Networks: Improving Robustness to Adversarial Examples, 2017, ICML.
[3] Yann Ollivier, et al. Auto-encoders: reconstruction versus compression, 2014, ArXiv.
[4] Andrew Slavin Ross, et al. Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients, 2017, AAAI.
[5] Jian Sun, et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, 2015, IEEE International Conference on Computer Vision (ICCV).
[6] Aleksander Madry, et al. Adversarially Robust Generalization Requires More Data, 2018, NeurIPS.
[7] S. A. Sherman, et al. Providence, 1906.
[8] Shie Mannor, et al. Robustness and Regularization of Support Vector Machines, 2008, J. Mach. Learn. Res.
[9] Kaizhu Huang, et al. A Unified Gradient Regularization Family for Adversarial Examples, 2015, IEEE International Conference on Data Mining.
[10] Jürgen Schmidhuber, et al. Simplifying Neural Nets by Discovering Flat Minima, 1994, NIPS.
[11] Tom Goldstein, et al. Are adversarial examples inevitable?, 2018, ICLR.
[12] David Balduzzi, et al. Neural Taylor Approximations: Convergence and Exploration in Rectifier Networks, 2016, ICML.
[13] Nathan Srebro, et al. The Implicit Bias of Gradient Descent on Separable Data, 2017, J. Mach. Learn. Res.
[14] Geoffrey E. Hinton, et al. Bayesian Learning for Neural Networks, 1995.
[15] Pan He, et al. Adversarial Examples: Attacks and Defenses for Deep Learning, 2017, IEEE Transactions on Neural Networks and Learning Systems.
[16] A. Wald. Statistical Decision Functions Which Minimize the Maximum Risk, 1945.
[17] Matthias Bethge, et al. Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models, 2017, ArXiv.
[18] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[19] John C. Duchi, et al. Certifiable Distributional Robustness with Principled Adversarial Training, 2017, ArXiv.
[20] Pascal Vincent, et al. Contractive Auto-Encoders: Explicit Invariance During Feature Extraction, 2011, ICML.
[21] Christopher M. Bishop, et al. Current address: Microsoft Research, 2022.
[22] Y. Le Cun, et al. Double backpropagation increasing generalization performance, 1991, IJCNN-91-Seattle International Joint Conference on Neural Networks.
[23] Matthias Hein, et al. Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation, 2017, NIPS.
[24] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2015, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[25] James Bailey, et al. The vulnerability of learning to adversarial perturbation increases with intrinsic dimensionality, 2017, IEEE Workshop on Information Forensics and Security (WIFS).
[26] M. V. Rossum, et al. In Neural Computation, 2022.
[27] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[28] Martin Wattenberg, et al. Adversarial Spheres, 2018, ICLR.
[29] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[30] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.