Adversarially Robust Generalization Requires More Data
Ludwig Schmidt | Shibani Santurkar | Dimitris Tsipras | Kunal Talwar | Aleksander Madry
[1] A. Wald. Statistical Decision Functions Which Minimize the Maximum Risk, 1945.
[2] Ming Li, et al. Learning in the presence of malicious errors, 1993, STOC '88.
[3] Eyal Kushilevitz, et al. PAC learning with nasty noise, 1999, Theor. Comput. Sci.
[4] R. Schapire, et al. Toward efficient agnostic learning, 1992, COLT '92.
[5] Pedro M. Domingos, et al. Adversarial classification, 2004, KDD.
[6] Christopher Meek, et al. Adversarial learning, 2005, KDD '05.
[7] Yann LeCun, et al. The MNIST database of handwritten digits, 2005.
[8] Shie Mannor, et al. Robustness and Regularization of Support Vector Machines, 2008, J. Mach. Learn. Res.
[9] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[10] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[11] Shie Mannor, et al. Robustness and generalization, 2010, Machine Learning.
[12] Gábor Lugosi, et al. Concentration Inequalities: A Nonasymptotic Theory of Independence, 2013.
[13] Jianqing Fan, et al. High-Dimensional Statistics, 2014.
[14] Jean-Philippe Vial, et al. Robust Optimization, 2021, ICORES.
[15] Shai Ben-David, et al. Understanding Machine Learning: From Theory to Algorithms, 2014.
[16] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[17] Amaury Habrard, et al. Robustness and generalization for metric learning, 2012, Neurocomputing.
[18] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[19] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[20] Nina Narodytska, et al. Simple Black-Box Adversarial Perturbations for Deep Networks, 2016, arXiv.
[21] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[22] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[23] Seyed-Mohsen Moosavi-Dezfooli, et al. Robustness of classifiers: from adversarial to random noise, 2016, NIPS.
[24] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[25] Lujo Bauer, et al. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, 2016, CCS.
[26] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[27] David A. Wagner, et al. Defensive Distillation is Not Robust to Adversarial Examples, 2016, arXiv.
[28] Patrick D. McDaniel, et al. Adversarial Perturbations Against Deep Neural Networks for Malware Classification, 2016, arXiv.
[29] Micah Sherr, et al. Hidden Voice Commands, 2016, USENIX Security Symposium.
[30] Michael P. Wellman, et al. Towards the Science of Security and Privacy in Machine Learning, 2016, arXiv.
[31] Seyed-Mohsen Moosavi-Dezfooli, et al. Universal Adversarial Perturbations, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[32] Moustapha Cissé, et al. Houdini: Fooling Deep Structured Visual and Speech Recognition Models with Adversarial Examples, 2017.
[33] Arslan Munir, et al. Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks, 2017, MLDM.
[34] Dan Boneh, et al. The Space of Transferable Adversarial Examples, 2017, arXiv.
[35] Wenyuan Xu, et al. DolphinAttack: Inaudible Voice Commands, 2017, CCS.
[36] Dawn Xiaodong Song, et al. Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong, 2017, arXiv.
[37] Aleksander Madry, et al. A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations, 2017, arXiv.
[38] Sandy H. Huang, et al. Adversarial Attacks on Neural Network Policies, 2017, ICLR.
[39] Trevor Darrell, et al. Can you fool AI with adversarial examples on a visual Turing test?, 2017, arXiv.
[40] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[41] Prateek Mittal, et al. POSTER: Inaudible Voice Commands, 2017, CCS.
[42] Percy Liang, et al. Adversarial Examples for Evaluating Reading Comprehension Systems, 2017, EMNLP.
[43] Dan Boneh, et al. Ensemble Adversarial Training: Attacks and Defenses, 2017, ICLR.
[44] Somesh Jha, et al. Analyzing the Robustness of Nearest Neighbors to Adversarial Examples, 2017, ICML.
[45] J. Zico Kolter, et al. Provable defenses against adversarial examples via the convex outer adversarial polytope, 2017, ICML.
[46] Fabio Roli, et al. Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning, 2018, CCS.
[47] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[48] David A. Wagner, et al. Audio Adversarial Examples: Targeted Attacks on Speech-to-Text, 2018, 2018 IEEE Security and Privacy Workshops (SPW).
[49] Hamza Fawzi, et al. Adversarial vulnerability for any classifier, 2018, NeurIPS.
[50] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[51] Philip H. S. Torr, et al. On the Robustness of Semantic Segmentation Models to Adversarial Attacks, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[52] John C. Duchi, et al. Certifying Some Distributional Robustness with Principled Adversarial Training, 2017, ICLR.
[53] Dawn Xiaodong Song, et al. Adversarial Examples for Generative Models, 2017, 2018 IEEE Security and Privacy Workshops (SPW).
[54] Abdullah Al-Dujaili, et al. Adversarial Deep Learning for Robust Detection of Binary Encoded Malware, 2018, 2018 IEEE Security and Privacy Workshops (SPW).
[55] Aditi Raghunathan, et al. Certified Defenses against Adversarial Examples, 2018, ICLR.
[56] Yanjun Qi, et al. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks, 2017, NDSS.
[57] Quoc V. Le, et al. Intriguing Properties of Adversarial Examples, 2017, ICLR.
[58] Mingyan Liu, et al. Spatially Transformed Adversarial Examples, 2018, ICLR.
[59] Martin Wattenberg, et al. Adversarial Spheres, 2018, ICLR.
[60] Kouichi Sakurai, et al. One Pixel Attack for Fooling Deep Neural Networks, 2017, IEEE Transactions on Evolutionary Computation.