[1] Fabio Roli, et al. Evasion Attacks against Machine Learning at Test Time, 2013, ECML/PKDD.
[2] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[3] Valentina Zantedeschi, et al. Efficient Defenses Against Adversarial Attacks, 2017, AISec@CCS.
[4] Michael I. Jordan, et al. Theoretically Principled Trade-off between Robustness and Accuracy, 2019, ICML.
[5] Lawrence Carin, et al. Second-Order Adversarial Attack and Certifiable Robustness, 2018, arXiv.
[6] J. Zico Kolter, et al. Certified Adversarial Robustness via Randomized Smoothing, 2019, ICML.
[7] Samy Bengio, et al. Adversarial Machine Learning at Scale, 2016, ICLR.
[8] Tolga Tasdizen, et al. Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning, 2016, NIPS.
[9] Bin Dong, et al. Enhancing Certified Robustness of Smoothed Classifiers via Weighted Model Ensembling, 2020, arXiv.
[10] Logan Engstrom, et al. Synthesizing Robust Adversarial Examples, 2017, ICML.
[11] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[12] Vinod Vaikuntanathan, et al. Computational Limitations in Robust Classification and Win-Win Results, 2019, IACR Cryptol. ePrint Arch.
[13] Zhun Deng, et al. How Does Mixup Help With Robustness and Generalization?, 2020, arXiv.
[14] Amir Najafi, et al. Robustness to Adversarial Perturbations in Learning from Incomplete Data, 2019, NeurIPS.
[15] Frank Hutter, et al. SGDR: Stochastic Gradient Descent with Warm Restarts, 2016, ICLR.
[16] T. Goldstein, et al. Certified Defenses for Adversarial Patches, 2020, ICLR.
[17] Hangfeng He, et al. Towards Understanding the Dynamics of the First-Order Adversaries, 2020, ICML.
[18] Timo Aila, et al. Temporal Ensembling for Semi-Supervised Learning, 2016, ICLR.
[19] Jason Yosinski, et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[20] Ilya P. Razenshteyn, et al. Adversarial examples from computational constraints, 2018, ICML.
[21] Aditi Raghunathan, et al. Certified Defenses against Adversarial Examples, 2018, ICLR.
[22] Hongzhe Li, et al. Transfer learning for high-dimensional linear regression: Prediction, estimation and minimax optimality, 2020, Journal of the Royal Statistical Society: Series B (Statistical Methodology).
[23] Pedro M. Domingos, et al. Adversarial classification, 2004, KDD.
[24] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[25] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[26] Yang Song, et al. Improving the Robustness of Deep Neural Networks via Stability Training, 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[27] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[28] Suman Jana, et al. Certified Robustness to Adversarial Examples with Differential Privacy, 2018, 2019 IEEE Symposium on Security and Privacy (SP).
[29] Kimin Lee, et al. Using Pre-Training Can Improve Model Robustness and Uncertainty, 2019, ICML.
[30] Ludwig Schmidt, et al. Unlabeled Data Improves Adversarial Robustness, 2019, NeurIPS.
[31] Shin Ishii, et al. Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[32] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[33] Di He, et al. Adversarially Robust Generalization Just Requires More Unlabeled Data, 2019, arXiv.
[34] Cynthia Dwork, et al. Interpreting Robust Optimization via Adversarial Influence Functions, 2020, ICML.
[35] Amos J. Storkey, et al. School of Informatics, University of Edinburgh, 2022.
[36] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[37] Christopher Meek, et al. Adversarial learning, 2005, KDD '05.
[38] Po-Sen Huang, et al. Are Labels Required for Improving Adversarial Robustness?, 2019, NeurIPS.
[39] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[40] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[41] Jing Ma, et al. CHIME: Clustering of high-dimensional Gaussian mixtures with EM algorithm and its optimality, 2019, The Annals of Statistics.
[42] Aleksander Madry, et al. Adversarially Robust Generalization Requires More Data, 2018, NeurIPS.
[43] Moritz Hardt, et al. Tight Bounds for Learning a Mixture of Two Gaussians, 2014, STOC.
[44] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[45] Fabio Roli, et al. Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning, 2018, CCS.
[46] Moustapha Cissé, et al. Countering Adversarial Images using Input Transformations, 2018, ICLR.