Robustness May Be at Odds with Accuracy
Dimitris Tsipras | Shibani Santurkar | Logan Engstrom | Alexander Turner | Aleksander Madry
[1] Deborah Silver, et al. Feature Visualization, 1994, Scientific Visualization.
[2] Pedro M. Domingos, et al. Adversarial classification, 2004, KDD.
[3] Yann LeCun, et al. The MNIST database of handwritten digits, 2005.
[4] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[5] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[6] Shie Mannor, et al. Robustness and generalization, 2010, Machine Learning.
[7] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[8] Geoffrey E. Hinton, et al. Speech recognition with deep recurrent neural networks, 2013, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[9] Daniel Lowd, et al. Convex Adversarial Collective Classification, 2013, ICML.
[10] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[11] Max Welling, et al. Auto-Encoding Variational Bayes, 2013, ICLR.
[12] Jean-Philippe Vial, et al. Robust Optimization, 2021, ICORES.
[13] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[14] Andrew Zisserman, et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, 2013, ICLR.
[15] Daniel Lowd, et al. On Robustness and Regularization of Structural Support Vector Machines, 2014, ICML.
[16] Hod Lipson, et al. Understanding Neural Networks Through Deep Visualization, 2015, ArXiv.
[17] Jason Yosinski, et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, 2015, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[18] Pascal Frossard, et al. Manitest: Are classifiers really invariant?, 2015, BMVC.
[19] Ira Kemelmacher-Shlizerman, et al. What Makes Tom Hanks Look Like Tom Hanks, 2015, IEEE International Conference on Computer Vision (ICCV).
[20] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[21] Jian Sun, et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, 2015, IEEE International Conference on Computer Vision (ICCV).
[22] Shane Legg, et al. Human-level control through deep reinforcement learning, 2015, Nature.
[23] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[24] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[25] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[26] Seyed-Mohsen Moosavi-Dezfooli, et al. Robustness of classifiers: from adversarial to random noise, 2016, NIPS.
[27] Ira Kemelmacher-Shlizerman, et al. Transfiguring portraits, 2016, ACM Trans. Graph.
[28] Lujo Bauer, et al. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, 2016, CCS.
[29] Demis Hassabis, et al. Mastering the game of Go with deep neural networks and tree search, 2016, Nature.
[30] Atul Prakash, et al. Robust Physical-World Attacks on Machine Learning Models, 2017, ArXiv.
[31] Xavier Gastaldi, et al. Shake-Shake regularization, 2017, ArXiv.
[32] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[33] John C. Duchi, et al. Certifiable Distributional Robustness with Principled Adversarial Training, 2017, ArXiv.
[34] Aleksander Madry, et al. A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations, 2017, ArXiv.
[35] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[36] Demis Hassabis, et al. Mastering the game of Go without human knowledge, 2017, Nature.
[37] Robert Pless, et al. Deep Feature Interpolation for Image Content Changes, 2017, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[38] Dawn Song, et al. Robust Physical-World Attacks on Deep Learning Models, 2017, ArXiv:1707.08945.
[39] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2017, IEEE Symposium on Security and Privacy (SP).
[40] Samy Bengio, et al. Adversarial Machine Learning at Scale, 2016, ICLR.
[41] Russ Tedrake, et al. Verifying Neural Networks with Mixed Integer Programming, 2017, ArXiv.
[42] Pascal Frossard, et al. Analysis of classifiers' robustness to adversarial perturbations, 2015, Machine Learning.
[43] J. Zico Kolter, et al. Scaling provable adversarial defenses, 2018, NeurIPS.
[44] Somesh Jha, et al. Analyzing the Robustness of Nearest Neighbors to Adversarial Examples, 2017, ICML.
[45] J. Zico Kolter, et al. Provable defenses against adversarial examples via the convex outer adversarial polytope, 2017, ICML.
[46] Pushmeet Kohli, et al. Training verified learners with learned verifiers, 2018, ArXiv.
[47] Fabio Roli, et al. Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning, 2018, CCS.
[48] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[49] Hamza Fawzi, et al. Adversarial vulnerability for any classifier, 2018, NeurIPS.
[50] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[51] Aleksander Madry, et al. Adversarially Robust Generalization Requires More Data, 2018, NeurIPS.
[52] John C. Duchi, et al. Certifying Some Distributional Robustness with Principled Adversarial Training, 2017, ICLR.
[53] Jinfeng Yi, et al. Is Robustness the Cost of Accuracy? - A Comprehensive Study on the Robustness of 18 Deep Image Classification Models, 2018, ECCV.
[54] Pushmeet Kohli, et al. A Dual Approach to Scalable Verification of Deep Networks, 2018, UAI.
[55] Logan Engstrom, et al. Synthesizing Robust Adversarial Examples, 2017, ICML.
[56] Aditi Raghunathan, et al. Certified Defenses against Adversarial Examples, 2018, ICLR.
[57] Andrew Slavin Ross, et al. Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients, 2017, AAAI.
[58] Mingyan Liu, et al. Spatially Transformed Adversarial Examples, 2018, ICLR.
[59] Martin Wattenberg, et al. Adversarial Spheres, 2018, ICLR.
[60] Harini Kannan, et al. Adversarial Logit Pairing, 2018, NIPS.
[61] Pushmeet Kohli, et al. Adversarial Risk and the Dangers of Evaluating Against Weak Attacks, 2018, ICML.
[62] Aleksander Madry, et al. Exploring the Landscape of Spatial Robustness, 2017, ICML.
[63] Shin Ishii, et al. Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[64] Russ Tedrake, et al. Evaluating Robustness of Neural Networks with Mixed Integer Programming, 2017, ICLR.
[65] Ilya P. Razenshteyn, et al. Adversarial examples from computational constraints, 2018, ICML.
[66] Aleksander Madry, et al. Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability, 2018, ICLR.