[1] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[2] Yuanzhi Li, et al. Feature Purification: How Adversarial Training Performs Robust Deep Learning, 2021, IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS).
[3] Tselil Schramm, et al. Non-asymptotic approximations of neural networks by Gaussian processes, 2021, COLT.
[4] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[5] Gábor Lugosi, et al. Concentration Inequalities: A Nonasymptotic Theory of Independence, 2013.
[6] G. B. Arous, et al. Geometry and Temperature Chaos in Mixed Spherical Spin Glasses at Low Temperature: The Perturbative Regime, 2018, Communications on Pure and Applied Mathematics.
[7] Martin J. Wainwright. High-Dimensional Statistics, 2019.
[8] Seyed-Mohsen Moosavi-Dezfooli, et al. Universal Adversarial Perturbations, 2017, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[9] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[10] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[11] Amit Daniely, et al. Most ReLU Networks Suffer from ℓ2 Adversarial Perturbations, 2020, arXiv.
[12] Pushmeet Kohli, et al. Adversarial Robustness through Local Linearization, 2019, NeurIPS.
[13] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[14] Seyed-Mohsen Moosavi-Dezfooli, et al. Robustness via Curvature Regularization, and Vice Versa, 2019, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[15] Adi Shamir, et al. A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance, 2019, arXiv.