Semidefinite relaxations for certifying robustness to adversarial examples
Aditi Raghunathan | Percy Liang | Jacob Steinhardt