Provably robust classification of adversarial examples with detection
Fatemeh Sheikholeslami | Ali Lotfi | J. Zico Kolter
[1] Matthias Hein, et al. Certifiably Adversarially Robust Detection of Out-of-Distribution Data, 2020, NeurIPS.
[2] Rüdiger Ehlers, et al. Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks, 2017, ATVA.
[3] Georgios B. Giannakis, et al. Minimum Uncertainty Based Detection of Adversaries in Deep Neural Networks, 2020, Information Theory and Applications Workshop (ITA).
[4] Timothy A. Mann, et al. On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models, 2018, ArXiv.
[5] Peilin Zhong, et al. Resisting Adversarial Attacks by k-Winners-Take-All, 2019, ArXiv.
[6] J. Zico Kolter, et al. Provable defenses against adversarial examples via the convex outer adversarial polytope, 2018, ICML.
[7] Dina Katabi, et al. ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation, 2019, ICML.
[8] Cho-Jui Hsieh, et al. A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks, 2019, NeurIPS.
[9] Matthew Mirman, et al. A Provable Defense for Deep Residual Networks, 2019, ArXiv.
[10] Florian Tramèr, et al. On Adaptive Attacks to Adversarial Example Defenses, 2020, NeurIPS.
[11] Bernt Schiele, et al. Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks, 2020, ICML.
[12] Yarin Gal, et al. Understanding Measures of Uncertainty for Adversarial Example Detection, 2018, UAI.
[13] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[14] Ning Chen, et al. Improving Adversarial Robustness via Promoting Ensemble Diversity, 2019, ICML.
[15] Michael I. Jordan, et al. Theoretically Principled Trade-off between Robustness and Accuracy, 2019, ICML.
[16] J. Zico Kolter, et al. Fast is better than free: Revisiting adversarial training, 2020, ICLR.
[17] Mislav Balunovic, et al. Adversarial Training and Provable Defenses: Bridging the Gap, 2020, ICLR.
[18] Matthew Mirman, et al. Differentiable Abstract Interpretation for Provably Robust Neural Networks, 2018, ICML.
[19] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2018, ICLR.
[20] Manfred Morari, et al. Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks, 2019, NeurIPS.
[21] Pushmeet Kohli, et al. A Dual Approach to Scalable Verification of Deep Networks, 2018, UAI.
[22] Thomas Hofmann, et al. The Odds are Odd: A Statistical Test for Detecting Adversarial Examples, 2019, ICML.
[23] Aditi Raghunathan, et al. Certified Defenses against Adversarial Examples, 2018, ICLR.
[24] Russ Tedrake, et al. Evaluating Robustness of Neural Networks with Mixed Integer Programming, 2019, ICLR.
[25] Soheil Feizi, et al. Playing it Safe: Adversarial Robustness with an Abstain Option, 2019, ArXiv.
[26] Ananthram Swami, et al. Error Correcting Output Codes Improve Probability Estimation and Adversarial Robustness of Deep Neural Networks, 2019, NeurIPS.
[27] Mitali Bafna, et al. Thwarting Adversarial Examples: An L_0-Robust Sparse Fourier Transform, 2018, NeurIPS.
[28] Larry S. Davis, et al. Adversarial Training for Free!, 2019, NeurIPS.
[29] Gustavo K. Rohde, et al. GAT: Generative Adversarial Training for Adversarial Example Detection and Robust Classification, 2020, ICLR.
[30] Aleksander Madry, et al. Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability, 2019, ICLR.
[31] Tianyu Pang, et al. Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks, 2020, ICLR.
[32] Cho-Jui Hsieh, et al. Towards Stable and Efficient Training of Verifiably Robust Neural Networks, 2020, ICLR.