Peter L. Bartlett | Yeshwanth Cherapanamjeri | Sébastien Bubeck
[1] Lawrence K. Saul, et al. Kernel Methods for Deep Learning, 2009, NIPS.
[2] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[3] Mark Sellke, et al. A Universal Law of Robustness via Isoperimetry, 2021, arXiv.
[4] Ryan R. Curtin, et al. Detecting Adversarial Samples from Artifacts, 2017, arXiv.
[5] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[6] Yeshwanth Cherapanamjeri, et al. A single gradient step finds adversarial examples on random two-layers neural networks, 2021, arXiv.
[7] Pushmeet Kohli, et al. Adversarial Robustness through Local Linearization, 2019, NeurIPS.
[8] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[9] Amit Daniely, et al. Most ReLU Networks Suffer from ℓ2 Adversarial Perturbations, 2020, arXiv.
[10] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[11] V. Koltchinskii, et al. High Dimensional Probability, 2006, math/0612726.
[12] Adi Shamir, et al. A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance, 2019, arXiv.
[13] Gábor Lugosi, et al. Concentration Inequalities, 2008, COLT.
[14] Ilya P. Razenshteyn, et al. Adversarial examples from computational constraints, 2018, ICML.
[15] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).