A Closer Look at Accuracy vs. Robustness
Yao-Yuan Yang | Cyrus Rashtchian | Hongyang Zhang | Ruslan Salakhutdinov | Kamalika Chaudhuri
[1] Aleksander Madry, et al. Adversarially Robust Generalization Requires More Data, 2018, NeurIPS.
[2] Daniel Cullina, et al. Lower Bounds on Adversarial Robustness from Optimal Transport, 2019, NeurIPS.
[3] Elvis Dohmatob, et al. Generalized No Free Lunch Theorem for Adversarial Robustness, 2018, ICML.
[4] Di He, et al. Adversarially Robust Generalization Just Requires More Unlabeled Data, 2019, ArXiv.
[5] Matthias Hein, et al. Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation, 2017, NIPS.
[6] Pushmeet Kohli, et al. Adversarial Robustness through Local Linearization, 2019, NeurIPS.
[7] Somesh Jha, et al. Analyzing the Robustness of Nearest Neighbors to Adversarial Examples, 2017, ICML.
[8] Kamalika Chaudhuri, et al. When are Non-Parametric Methods Robust?, 2020, ICML.
[9] Ludwig Schmidt, et al. Unlabeled Data Improves Adversarial Robustness, 2019, NeurIPS.
[10] Jakob Verbeek, et al. Convolutional Neural Fabrics, 2016, NIPS.
[11] Andrew Slavin Ross, et al. Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients, 2017, AAAI.
[12] Gang Niu, et al. Where is the Bottleneck of Adversarial Learning with Unlabeled Data?, 2019, ArXiv.
[13] Yair Weiss, et al. A Bayes-Optimal View on Adversarial Examples, 2020, J. Mach. Learn. Res.
[14] Rui Xu, et al. When NAS Meets Robustness: In Search of Robust Architectures Against Adversarial Attacks, 2020, CVPR.
[15] Daniel Kifer, et al. Unifying Adversarial Training Algorithms with Data Gradient Regularization, 2017, Neural Computation.
[16] Matthew Mirman, et al. Fast and Effective Robustness Certification, 2018, NeurIPS.
[17] Mung Chiang, et al. Analyzing the Robustness of Open-World Machine Learning, 2019, AISec@CCS.
[18] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[19] Uri Shaham, et al. Understanding adversarial training: Increasing local stability of supervised models through robust optimization, 2015, Neurocomputing.
[20] Aleksander Madry, et al. Robustness May Be at Odds with Accuracy, 2018, ICLR.
[21] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[22] Vatsal Sharan, et al. A Spectral View of Adversarially Robust Features, 2018, NeurIPS.
[23] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[24] J. Zico Kolter, et al. Certified Adversarial Robustness via Randomized Smoothing, 2019, ICML.
[25] Po-Sen Huang, et al. An Alternative Surrogate Loss for PGD-based Adversarial Testing, 2019, ArXiv.
[26] Aditi Raghunathan, et al. Adversarial Training Can Hurt Generalization, 2019, ArXiv.
[27] Christopher Meek, et al. Adversarial learning, 2005, KDD '05.
[28] Ulrike von Luxburg, et al. Distance-Based Classification with Lipschitz Functions, 2004, J. Mach. Learn. Res.
[29] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[30] Greg Yang, et al. Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers, 2019, NeurIPS.
[31] J. Zico Kolter, et al. Overfitting in adversarially robust deep learning, 2020, ICML.
[32] Ananthram Swami, et al. Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples, 2016, ArXiv.
[33] Muni Sreenivas Pydi, et al. Adversarial Risk via Optimal Transport and Optimal Couplings, 2020, ICML.
[34] Samy Bengio, et al. Adversarial Machine Learning at Scale, 2016, ICLR.
[35] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, CVPR.
[36] Ilya P. Razenshteyn, et al. Adversarial examples from computational constraints, 2018, ICML.
[37] Ritu Chadha, et al. Limitations of the Lipschitz constant as a defense against adversarial examples, 2018, Nemesis/UrbReas/SoGood/IWAISe/GDM@PKDD/ECML.
[38] Baoyuan Wu, et al. Toward Adversarial Robustness via Semi-supervised Robust Training, 2020, ArXiv.
[39] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, IEEE Symposium on Security and Privacy (SP).
[40] Lawrence Carin, et al. Certified Adversarial Robustness with Additive Gaussian Noise, 2018, NeurIPS.
[41] Yu Cheng, et al. Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning, 2020, CVPR.
[42] Adam M. Oberman, et al. Scaleable input gradient regularization for adversarial robustness, 2019, Machine Learning with Applications.
[43] James Bailey, et al. Improving Adversarial Robustness Requires Revisiting Misclassified Examples, 2020, ICLR.
[44] Cyrus Rashtchian, et al. Robustness for Non-Parametric Classification: A Generic Attack and Defense, 2020, AISTATS.
[45] Dawn Xiaodong Song, et al. Delving into Transferable Adversarial Examples and Black-box Attacks, 2016, ICLR.
[46] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, IEEE Symposium on Security and Privacy (SP).
[47] John C. Duchi, et al. Certifying Some Distributional Robustness with Principled Adversarial Training, 2017, ICLR.
[48] Po-Sen Huang, et al. Are Labels Required for Improving Adversarial Robustness?, 2019, NeurIPS.
[49] Cem Anil, et al. Sorting out Lipschitz function approximation, 2018, ICML.
[50] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[51] Aravindan Vijayaraghavan, et al. On Robustness to Adversarial Examples and Polynomial Optimization, 2019, NeurIPS.
[52] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[53] Jinfeng Yi, et al. Is Robustness the Cost of Accuracy? A Comprehensive Study on the Robustness of 18 Deep Image Classification Models, 2018, ECCV.
[54] Michael I. Jordan, et al. Theoretically Principled Trade-off between Robustness and Accuracy, 2019, ICML.
[55] Jinghui Chen, et al. Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models, 2020, AISTATS.
[56] Larry S. Davis, et al. Adversarial Training for Free!, 2019, NeurIPS.
[57] Gang Niu, et al. Attacks Which Do Not Kill Training Make Adversarial Learning Stronger, 2020, ICML.
[58] John Duchi, et al. Understanding and Mitigating the Tradeoff Between Robustness and Accuracy, 2020, ICML.
[59] Chrisantha Fernando, et al. PathNet: Evolution Channels Gradient Descent in Super Neural Networks, 2017, ArXiv.
[60] Hisashi Kashima, et al. Theoretical evidence for adversarial robustness through randomization: the case of the Exponential family, 2019, ArXiv.
[61] Jinfeng Yi, et al. Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach, 2018, ICLR.
[62] Hamza Fawzi, et al. Adversarial vulnerability for any classifier, 2018, NeurIPS.
[63] Shie Mannor, et al. Robustness and Regularization of Support Vector Machines, 2008, J. Mach. Learn. Res.