Completing the Picture: Randomized Smoothing Suffers from the Curse of Dimensionality for a Large Family of Distributions
Yihan Wu | Aleksei Kuvshinov | Aleksandar Bojchevski | Stephan Günnemann
[1] Aleksander Madry, et al. Robustness May Be at Odds with Accuracy, 2018, ICLR.
[2] Stephan Günnemann, et al. Efficient Robustness Certificates for Discrete Data: Sparsity-Aware Randomized Smoothing for Graphs, Images and More, 2020, ICML.
[3] S. Li. Concise Formulas for the Area and Volume of a Hyperspherical Cap, 2011.
[4] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[5] Tom Goldstein, et al. Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness, 2020, ICML.
[6] Cyrus Rashtchian, et al. A Closer Look at Accuracy vs. Robustness, 2020, NeurIPS.
[7] Qiang Liu, et al. Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework, 2020, NeurIPS.
[8] Jiliang Tang, et al. Adversarial Attacks and Defenses in Images, Graphs and Text: A Review, 2019, International Journal of Automation and Computing.
[9] Ilya P. Razenshteyn, et al. Randomized Smoothing of All Shapes and Sizes, 2020, ICML.
[10] Ajmal Mian, et al. Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey, 2018, IEEE Access.
[11] Pushmeet Kohli, et al. A Framework for Robustness Certification of Smoothed Classifiers Using f-Divergences, 2020, ICLR.
[12] Suman Jana, et al. Certified Robustness to Adversarial Examples with Differential Privacy, 2019, IEEE Symposium on Security and Privacy (SP).
[13] Russ Tedrake, et al. Evaluating Robustness of Neural Networks with Mixed Integer Programming, 2017, ICLR.
[14] J. Zico Kolter, et al. Certified Adversarial Robustness via Randomized Smoothing, 2019, ICML.
[15] Cho-Jui Hsieh, et al. Towards Stable and Efficient Training of Verifiably Robust Neural Networks, 2019, ICLR.
[16] Jamie Hayes, et al. Extensions and Limitations of Randomized Smoothing for Robustness Guarantees, 2020, IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[17] Avrim Blum, et al. Random Smoothing Might Be Unable to Certify ℓ∞ Robustness for High-Dimensional Images, 2020, Journal of Machine Learning Research.
[18] Mingjie Sun, et al. Denoised Smoothing: A Provable Defense for Pretrained Classifiers, 2020, NeurIPS.
[19] Jinwoo Shin, et al. Consistency Regularization for Certified Robustness of Smoothed Classifiers, 2020, NeurIPS.
[20] Cho-Jui Hsieh, et al. RecurJac: An Efficient Recursive Algorithm for Bounding Jacobian Matrix of Neural Networks and Its Applications, 2018, AAAI.
[21] Di Wang, et al. Towards Assessment of Randomized Mechanisms for Certifying Adversarial Robustness, 2020, arXiv.
[22] Tommi S. Jaakkola, et al. Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers, 2019, NeurIPS.
[23] E. S. Pearson, et al. The Use of Confidence or Fiducial Limits Illustrated in the Case of the Binomial, 1934.
[24] Aleksander Madry, et al. On Adaptive Attacks to Adversarial Example Defenses, 2020, NeurIPS.