Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks
Pranjal Awasthi | Mehryar Mohri | Natalie Frank
[1] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[2] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[3] Nathan Srebro, et al. VC Classes are Adversarially Robustly Learnable, but Only Improperly, 2019, COLT.
[4] M. Talagrand, et al. Probability in Banach Spaces: Isoperimetry and Processes, 1991.
[5] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[6] Matus Telgarsky, et al. Spectrally-normalized margin bounds for neural networks, 2017, NIPS.
[7] Matthias Bethge, et al. Towards the first adversarially robust neural network model on MNIST, 2018, ICLR.
[8] Aleksander Madry, et al. Adversarially Robust Generalization Requires More Data, 2018, NeurIPS.
[9] Ambuj Tewari, et al. On the Complexity of Linear Prediction: Risk Bounds, Margin Bounds, and Regularization, 2008, NIPS.
[10] Colin Wei, et al. Improved Sample Complexities for Deep Networks and Robust Classification via an All-Layer Margin, 2019, ArXiv.
[11] Aditi Raghunathan, et al. Adversarial Training Can Hurt Generalization, 2019, ArXiv.
[12] L. Polyakova. On minimizing the sum of a convex function and a concave function, 1986.
[13] Ming Li, et al. Learning in the presence of malicious errors, 1993, STOC '88.
[14] Atul Prakash, et al. Robust Physical-World Attacks on Deep Learning Visual Classification, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[15] Pranjal Awasthi, et al. On the Rademacher Complexity of Linear Hypothesis Sets, 2020, ArXiv.
[16] Mehryar Mohri, et al. AdaNet: Adaptive Structural Learning of Artificial Neural Networks, 2016, ICML.
[17] Po-Sen Huang, et al. An Alternative Surrogate Loss for PGD-based Adversarial Testing, 2019, ArXiv.
[18] Preetum Nakkiran, et al. Adversarial Robustness May Be at Odds With Simplicity, 2019, ArXiv.
[19] J. Wissel, et al. On the Best Constants in the Khintchine Inequality, 2007.
[20] Po-Ling Loh, et al. Adversarial Risk Bounds via Function Transformation, 2018.
[21] Ameet Talwalkar, et al. Foundations of Machine Learning, 2012, Adaptive Computation and Machine Learning.
[22] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[23] Norbert Sauer. On the Density of Families of Sets, 1972, J. Comb. Theory A.
[24] Kannan Ramchandran, et al. Rademacher Complexity for Adversarially Robust Generalization, 2018, ICML.
[25] Pin-Yu Chen, et al. Attacking the Madry Defense Model with L1-based Adversarial Examples, 2017, ICLR.
[26] Aravindan Vijayaraghavan, et al. On Robustness to Adversarial Examples and Polynomial Optimization, 2019, NeurIPS.
[27] Po-Ling Loh, et al. Adversarial Risk Bounds for Binary Classification via Function Transformation, 2018, ArXiv.
[28] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[29] David A. Wagner, et al. Audio Adversarial Examples: Targeted Attacks on Speech-to-Text, 2018, 2018 IEEE Security and Privacy Workshops (SPW).
[30] Ilya P. Razenshteyn, et al. Adversarial examples from computational constraints, 2018, ICML.
[31] R. Schapire, et al. Toward efficient agnostic learning, 1992, COLT '92.
[32] Aleksander Madry, et al. Robustness May Be at Odds with Accuracy, 2018, ICLR.
[33] S. Shelah. A combinatorial problem; stability and order for models and theories in infinitary languages, 1972.
[34] Horst Alzer. On some inequalities for the gamma and psi functions, 1997, Math. Comput.
[35] Yishay Mansour, et al. Improved generalization bounds for robust learning, 2018, ALT.
[36] Vinod Vaikuntanathan, et al. Computational Limitations in Robust Classification and Win-Win Results, 2019, IACR Cryptol. ePrint Arch.
[37] Corinna Cortes, et al. Relative Deviation Margin Bounds, 2020, ICML.
[38] Dawn Xiaodong Song, et al. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning, 2017, ArXiv.
[39] Uriel Feige, et al. Learning and inference in the presence of corrupted inputs, 2015, COLT.
[40] Timothy A. Mann, et al. On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models, 2018, ArXiv.
[41] Yin Tat Lee, et al. Adversarial Examples from Cryptographic Pseudo-Random Generators, 2018, ArXiv.
[42] V. Koltchinskii, et al. Empirical margin distributions and bounding the generalization error of combined classifiers, 2002, math/0405343.
[43] Ronald F. Boisvert, et al. NIST Handbook of Mathematical Functions, 2010.