Liwei Wang | Di He | Bohang Zhang | Du Jiang
[1] Tom Goldstein, et al. Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness, 2020, ICML.
[2] Tengyu Ma, et al. Fixup Initialization: Residual Learning Without Normalization, 2019, ICLR.
[3] Cyrus Rashtchian, et al. A Closer Look at Accuracy vs. Robustness, 2020, NeurIPS.
[4] Junfeng Yang, et al. Efficient Formal Safety Analysis of Neural Networks, 2018, NeurIPS.
[5] Aditi Raghunathan, et al. Certified Defenses against Adversarial Examples, 2018, ICLR.
[6] Frank Allgöwer, et al. Training Robust Neural Networks Using Lipschitz Bounds, 2020, IEEE Control Systems Letters.
[7] Matthias Hein, et al. Provable Robustness of ReLU Networks via Maximization of Linear Regions, 2018, AISTATS.
[8] J. Zico Kolter, et al. Certified Adversarial Robustness via Randomized Smoothing, 2019, ICML.
[9] Samy Bengio, et al. Adversarial Machine Learning at Scale, 2016, ICLR.
[10] Bernhard Pfahringer, et al. Regularisation of Neural Networks by Enforcing Lipschitz Continuity, 2018, Machine Learning.
[11] Timothy A. Mann, et al. On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models, 2018, arXiv.
[12] Pushmeet Kohli, et al. A Dual Approach to Scalable Verification of Deep Networks, 2018, UAI.
[13] Cem Anil, et al. Sorting out Lipschitz Function Approximation, 2018, ICML.
[14] J. Zico Kolter, et al. Scaling Provable Adversarial Defenses, 2018, NeurIPS.
[15] Moustapha Cissé, et al. Parseval Networks: Improving Robustness to Adversarial Examples, 2017, ICML.
[16] Aleksander Madry, et al. Robustness May Be at Odds with Accuracy, 2018, ICLR.
[17] Matthias Hein, et al. Reliable Evaluation of Adversarial Robustness with an Ensemble of Diverse Parameter-Free Attacks, 2020, ICML.
[18] Ritu Chadha, et al. Limitations of the Lipschitz Constant as a Defense against Adversarial Examples, 2018, ECML PKDD Workshops.
[19] Avrim Blum, et al. Random Smoothing Might Be Unable to Certify ℓ∞ Robustness for High-Dimensional Images, 2020, JMLR.
[20] Jaewook Lee, et al. Lipschitz-Certifiable Training with a Tight Outer Bound, 2020, NeurIPS.
[21] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[22] Jinwoo Shin, et al. Consistency Regularization for Certified Robustness of Smoothed Classifiers, 2020, NeurIPS.
[23] Aleksander Madry, et al. Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability, 2018, ICLR.
[24] Swarat Chaudhuri, et al. AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation, 2018, IEEE Symposium on Security and Privacy (S&P).
[25] Mislav Balunovic, et al. Adversarial Training and Provable Defenses: Bridging the Gap, 2020, ICLR.
[26] Aleksander Madry, et al. On Adaptive Attacks to Adversarial Example Defenses, 2020, NeurIPS.
[27] Ilya P. Razenshteyn, et al. Randomized Smoothing of All Shapes and Sizes, 2020, ICML.
[28] Dahua Lin, et al. Towards Evaluating and Training Verifiably Robust Neural Networks, 2021, CVPR.
[29] Liwei Wang, et al. Towards Certifying ℓ∞ Robustness Using Neural Networks with ℓ∞-dist Neurons, 2021, ICML.
[30] Suman Jana, et al. Certified Robustness to Adversarial Examples with Differential Privacy, 2018, IEEE Symposium on Security and Privacy (S&P).
[31] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[32] Pushmeet Kohli, et al. Efficient Neural Network Verification with Exactness Characterization, 2019, UAI.
[33] J. Zico Kolter, et al. Provable Defenses against Adversarial Examples via the Convex Outer Adversarial Polytope, 2017, ICML.
[34] Ruitong Huang, et al. Max-Margin Adversarial (MMA) Training: Direct Input Space Margin Maximization through Adversarial Training, 2018, ICLR.
[35] Matthew Mirman, et al. The Fundamental Limits of Interval Arithmetic for Neural Networks, 2021, arXiv.
[36] Jinfeng Yi, et al. Fast Certified Robust Training with Short Warmup, 2021, NeurIPS.
[37] Pradeep Ravikumar, et al. MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius, 2020, ICLR.
[38] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[39] Haifeng Qian, et al. L2-Nonexpansive Neural Networks, 2018, ICLR.
[40] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[41] Pushmeet Kohli, et al. Adversarial Risk and the Dangers of Evaluating Against Weak Attacks, 2018, ICML.
[42] Qiang Liu, et al. Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework, 2020, NeurIPS.
[43] Sahil Singla, et al. Skew Orthogonal Convolutions, 2021, ICML.
[44] Greg Yang, et al. Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers, 2019, NeurIPS.
[45] Yuichi Yoshida, et al. Spectral Norm Regularization for Improving the Generalizability of Deep Learning, 2017, arXiv.
[46] Cho-Jui Hsieh, et al. Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond, 2020, NeurIPS.
[47] Joan Bruna, et al. Intriguing Properties of Neural Networks, 2013, ICLR.
[48] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[49] Fabio Roli, et al. Evasion Attacks against Machine Learning at Test Time, 2013, ECML/PKDD.
[50] Stephan Günnemann, et al. Completing the Picture: Randomized Smoothing Suffers from the Curse of Dimensionality for a Large Family of Distributions, 2021, AISTATS.
[51] Matt Fredrikson, et al. Globally-Robust Neural Networks, 2021, ICML.
[52] James Bailey, et al. Improving Adversarial Robustness Requires Revisiting Misclassified Examples, 2020, ICLR.
[53] David Tse, et al. Generalizable Adversarial Training via Spectral Normalization, 2018, ICLR.
[54] Matthew Mirman, et al. Fast and Effective Robustness Certification, 2018, NeurIPS.
[55] Matthew Mirman, et al. Differentiable Abstract Interpretation for Provably Robust Neural Networks, 2018, ICML.
[56] Michael I. Jordan, et al. Theoretically Principled Trade-off between Robustness and Accuracy, 2019, ICML.
[57] Cho-Jui Hsieh, et al. Efficient Neural Network Robustness Certification with General Activation Functions, 2018, NeurIPS.
[58] Masashi Sugiyama, et al. Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks, 2018, NeurIPS.
[59] Cho-Jui Hsieh, et al. A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks, 2019, NeurIPS.
[60] Cho-Jui Hsieh, et al. Towards Stable and Efficient Training of Verifiably Robust Neural Networks, 2019, ICLR.
[61] Inderjit S. Dhillon, et al. Towards Fast Computation of Certified Robustness for ReLU Networks, 2018, ICML.