Dario Zanca | Leo Schwinn | René Raab | Bjoern Eskofier | An Nguyen
[1] Matthias Hein, et al. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks, 2020, ICML.
[2] Charles Jin, et al. Manifold Regularization for Adversarial Robustness, 2020, arXiv.
[3] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[4] Timothy A. Mann, et al. Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples, 2020, arXiv.
[5] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2017, IEEE Symposium on Security and Privacy (SP).
[6] John X. Morris, et al. TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP, 2020, EMNLP.
[7] Martin Burger, et al. Identifying Untrustworthy Predictions in Neural Networks by Geometric Gradient Analysis, 2021, UAI.
[8] Prateek Mittal, et al. RobustBench: a standardized adversarial robustness benchmark, 2020, arXiv.
[9] Pushmeet Kohli, et al. Adversarial Risk and the Dangers of Evaluating Against Weak Attacks, 2018, ICML.
[10] Hang Su, et al. Boosting Adversarial Training with Hypersphere Embedding, 2020, NeurIPS.
[11] J. Zico Kolter, et al. Overfitting in adversarially robust deep learning, 2020, ICML.
[12] Hongyang R. Zhang, et al. Self-Adaptive Training: beyond Empirical Risk Minimization, 2020, NeurIPS.
[13] Martin Burger, et al. Sampled Nonlocal Gradients for Stronger Adversarial Attacks, 2020, arXiv.
[14] Michael I. Jordan, et al. Theoretically Principled Trade-off between Robustness and Accuracy, 2019, ICML.
[15] Bin Dong, et al. You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle, 2019, NeurIPS.
[16] Ruitong Huang, et al. Max-Margin Adversarial (MMA) Training: Direct Input Space Margin Maximization through Adversarial Training, 2018, ICLR.
[17] Aleksander Madry, et al. Adversarial Examples Are Not Bugs, They Are Features, 2019, NeurIPS.
[18] Dario Zanca, et al. Dynamically Sampled Nonlocal Gradients for Stronger Adversarial Attacks, 2021, International Joint Conference on Neural Networks (IJCNN).
[19] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[20] Aleksander Madry, et al. On Adaptive Attacks to Adversarial Example Defenses, 2020, NeurIPS.
[21] James Bailey, et al. Improving Adversarial Robustness Requires Revisiting Misclassified Examples, 2020, ICLR.
[22] Mohan S. Kankanhalli, et al. Attacks Which Do Not Kill Training Make Adversarial Learning Stronger, 2020, ICML.
[23] J. Zico Kolter, et al. Fast is better than free: Revisiting adversarial training, 2020, ICLR.
[24] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[25] Ki-Woong Park, et al. Multi-Targeted Adversarial Example in Evasion Attack on Deep Neural Network, 2018, IEEE Access.
[26] Kimin Lee, et al. Using Pre-Training Can Improve Model Robustness and Uncertainty, 2019, ICML.
[27] Ludwig Schmidt, et al. Unlabeled Data Improves Adversarial Robustness, 2019, NeurIPS.
[28] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[29] Suman Jana, et al. HYDRA: Pruning Adversarially Robust Neural Networks, 2020, NeurIPS.
[30] Kun He, et al. Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks, 2019, ICLR.
[31] Ling Shao, et al. Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks, 2019, IEEE/CVF International Conference on Computer Vision (ICCV).
[32] Colin Raffel, et al. Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition, 2019, ICML.
[33] Yisen Wang, et al. Adversarial Weight Perturbation Helps Robust Generalization, 2020, NeurIPS.