Theoretically Principled Trade-off between Robustness and Accuracy
Hongyang Zhang | Yaodong Yu | Jiantao Jiao | Eric P. Xing | Laurent El Ghaoui | Michael I. Jordan
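The paper this page indexes studies how to balance natural accuracy against adversarial robustness by regularizing a surrogate loss. As a rough, hedged sketch that is not taken from this page, the PyTorch snippet below illustrates that style of objective: cross-entropy on clean inputs plus a weighted KL term measured at a worst-case perturbation inside an L-infinity ball. The function name, hyperparameters, and constants (tradeoff_loss, beta, eps, step_size, num_steps) are illustrative assumptions only; the paper's exact formulation and settings should be taken from the paper itself.

```python
# Minimal sketch (not from this page) of a robustness/accuracy trade-off loss:
# natural cross-entropy plus beta times a KL divergence evaluated at a
# worst-case perturbation found by projected gradient ascent in an
# L-infinity ball. Inputs x are assumed to lie in [0, 1].
import torch
import torch.nn.functional as F


def tradeoff_loss(model, x, y, eps=8 / 255, step_size=2 / 255, num_steps=10, beta=6.0):
    # Clean-input predictions, fixed as the reference distribution.
    p_natural = F.softmax(model(x), dim=1).detach()

    # Inner maximization: push predictions on x_adv away from those on x.
    x_adv = (x + 0.001 * torch.randn_like(x)).detach()
    for _ in range(num_steps):
        x_adv = x_adv.detach().requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x_adv), dim=1), p_natural,
                      reduction="batchmean")
        grad = torch.autograd.grad(kl, x_adv)[0]
        x_adv = x_adv.detach() + step_size * grad.sign()
        # Project back into the eps-ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)

    # Outer minimization: accuracy term plus beta-weighted robustness term.
    natural_loss = F.cross_entropy(model(x), y)
    robust_loss = F.kl_div(F.log_softmax(model(x_adv), dim=1), p_natural,
                           reduction="batchmean")
    return natural_loss + beta * robust_loss
```

Larger beta trades natural accuracy for robustness; beta approaching zero recovers ordinary training on clean data.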
[1] F. Barthe. Extremal Properties of Central Half-Spaces for Product Measures, 2001.
[2] Tong Zhang, et al. Covering Number Bounds of Certain Regularized Linear Function Classes, 2002, J. Mach. Learn. Res.
[3] Michael I. Jordan, et al. Convexity, Classification, and Risk Bounds, 2006.
[4] Prasad Raghavendra, et al. Hardness of Learning Halfspaces with Noise, 2006, 2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06).
[5] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[6] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[7] Uri Shaham, et al. Understanding Adversarial Training: Increasing Local Stability of Neural Nets through Robust Optimization, 2015, ArXiv.
[8] Dale Schuurmans, et al. Learning with a Strong Adversary, 2015, ArXiv.
[9] Csaba Szepesvári, et al. Multiclass Classification Calibration Functions, 2016, ArXiv.
[10] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[11] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[12] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[13] Yang Song, et al. Improving the Robustness of Deep Neural Networks via Stability Training, 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Eduardo Valle, et al. Exploring the space of adversarial images, 2015, 2016 International Joint Conference on Neural Networks (IJCNN).
[15] Moustapha Cissé, et al. Parseval Networks: Improving Robustness to Adversarial Examples, 2017, ICML.
[16] Hao Chen, et al. MagNet: A Two-Pronged Defense against Adversarial Examples, 2017, CCS.
[17] Dawn Xiaodong Song, et al. Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong, 2017, ArXiv.
[18] Matthias Bethge, et al. Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models, 2017, ArXiv.
[19] John C. Duchi, et al. Certifiable Distributional Robustness with Principled Adversarial Training, 2017, ArXiv.
[20] Aleksander Madry, et al. A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations, 2017, ArXiv.
[21] Sandy H. Huang, et al. Adversarial Attacks on Neural Network Policies, 2017, ICLR.
[22] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[23] Samy Bengio, et al. Adversarial Machine Learning at Scale, 2016, ICLR.
[24] Alan L. Yuille, et al. Adversarial Examples for Semantic Segmentation and Object Detection, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[25] Percy Liang, et al. Adversarial Examples for Evaluating Reading Comprehension Systems, 2017, EMNLP.
[26] Nicholas Carlini, et al. Unrestricted Adversarial Examples, 2018, ArXiv.
[27] Dan Boneh, et al. Ensemble Adversarial Training: Attacks and Defenses, 2017, ICLR.
[28] Ryan P. Adams, et al. Motivating the Rules of the Game for Adversarial Example Research, 2018, ArXiv.
[29] Matthias Bethge, et al. Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models, 2017, ICLR.
[30] J. Zico Kolter, et al. Scaling provable adversarial defenses, 2018, NeurIPS.
[31] Kamyar Azizzadenesheli, et al. Stochastic Activation Pruning for Robust Adversarial Defense, 2018, ICLR.
[32] Yin Tat Lee, et al. Adversarial Examples from Cryptographic Pseudo-Random Generators, 2018, ArXiv.
[33] Saibal Mukhopadhyay, et al. Cascade Adversarial Machine Learning Regularized with a Unified Embedding, 2017, ICLR.
[34] J. Zico Kolter, et al. Provable defenses against adversarial examples via the convex outer adversarial polytope, 2017, ICML.
[35] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[36] Prateek Mittal, et al. PAC-learning in the presence of adversaries, 2018, NeurIPS.
[37] Logan Engstrom, et al. Evaluating and Understanding the Robustness of Adversarial Logit Pairing, 2018, ArXiv.
[38] Hamza Fawzi, et al. Adversarial vulnerability for any classifier, 2018, NeurIPS.
[39] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[40] Aditi Raghunathan, et al. Semidefinite relaxations for certifying robustness to adversarial examples, 2018, NeurIPS.
[41] Aleksander Madry, et al. Adversarially Robust Generalization Requires More Data, 2018, NeurIPS.
[42] John C. Duchi, et al. Certifying Some Distributional Robustness with Principled Adversarial Training, 2017, ICLR.
[43] Jinfeng Yi, et al. Is Robustness the Cost of Accuracy? - A Comprehensive Study on the Robustness of 18 Deep Image Classification Models, 2018, ECCV.
[44] Matthias Bethge, et al. Adversarial Vision Challenge, 2018, The NeurIPS '18 Competition.
[45] Colin Raffel, et al. Thermometer Encoding: One Hot Way To Resist Adversarial Examples, 2018, ICLR.
[46] Jun Zhu, et al. Boosting Adversarial Attacks with Momentum, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[47] Aditi Raghunathan, et al. Certified Defenses against Adversarial Examples, 2018, ICLR.
[48] James Bailey, et al. Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality, 2018, ICLR.
[49] Andrew Slavin Ross, et al. Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients, 2017, AAAI.
[50] Silvio Savarese, et al. Generalizing to Unseen Domains via Adversarial Data Augmentation, 2018, NeurIPS.
[51] Mingyan Liu, et al. Spatially Transformed Adversarial Examples, 2018, ICLR.
[52] Hongyang Zhang, et al. Deep Neural Networks with Multi-Branch Architectures Are Less Non-Convex, 2018, ArXiv.
[53] Yang Song, et al. PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples, 2017, ICLR.
[54] Harini Kannan, et al. Adversarial Logit Pairing, 2018, NIPS 2018.
[55] Pushmeet Kohli, et al. Adversarial Risk and the Dangers of Evaluating Against Weak Attacks, 2018, ICML.
[56] Rama Chellappa, et al. Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models, 2018, ICLR.
[57] Pengtao Xie, et al. Stackelberg GAN: Towards Provable Minimax Equilibrium via Multi-Generator Architectures, 2018, ArXiv.
[58] Aleksander Madry, et al. Exploring the Landscape of Spatial Robustness, 2017, ICML.
[59] Ruslan Salakhutdinov, et al. Deep Neural Networks with Multi-Branch Architectures Are Intrinsically Less Non-Convex, 2019, AISTATS.
[60] Kannan Ramchandran, et al. Rademacher Complexity for Adversarially Robust Generalization, 2018, ICML.
[61] Aleksander Madry, et al. Robustness May Be at Odds with Accuracy, 2018, ICLR.
[62] Giovanni S. Alberti, et al. ADef: an Iterative Algorithm to Construct Adversarial Deformations, 2018, ICLR.
[63] Alan L. Yuille, et al. Feature Denoising for Improving Adversarial Robustness, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[64] Shin Ishii, et al. Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[65] Di He, et al. Adversarially Robust Generalization Just Requires More Unlabeled Data, 2019, ArXiv.
[66] Po-Sen Huang, et al. Are Labels Required for Improving Adversarial Robustness?, 2019, NeurIPS.
[67] Ludwig Schmidt, et al. Unlabeled Data Improves Adversarial Robustness, 2019, NeurIPS.
[68] Ilya P. Razenshteyn, et al. Adversarial examples from computational constraints, 2018, ICML.
[69] Larry S. Davis, et al. Adversarial Training for Free!, 2019, NeurIPS.
[70] Bin Dong, et al. You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle, 2019, NeurIPS.
[71] Inderjit S. Dhillon, et al. The Limitations of Adversarial Training and the Blind-Spot Attack, 2019, ICLR.