Timothy A. Mann | Sven Gowal | Olivia Wiles | Sylvestre-Alvise Rebuffi | Dan A. Calian | Florian Stimberg
[1] Boris Polyak. Some methods of speeding up the convergence of iteration methods, 1964.
[2] Y. Nesterov. A method for solving the convex programming problem with convergence rate O(1/k^2), 1983.
[3] Antonio Torralba, et al. 80 Million Tiny Images: a Large Dataset for Non-parametric Object and Scene Recognition, 2008, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[4] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[5] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[6] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[7] Kevin Gimpel, et al. Gaussian Error Linear Units (GELUs), 2016.
[8] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[9] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[10] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[11] Kaiming He, et al. Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour, 2017, ArXiv.
[12] Samy Bengio, et al. Understanding deep learning requires rethinking generalization, 2016, ICLR.
[13] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[14] Frank Hutter, et al. SGDR: Stochastic Gradient Descent with Warm Restarts, 2016, ICLR.
[15] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[16] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[17] Graham W. Taylor, et al. Improved Regularization of Convolutional Neural Networks with Cutout, 2017, ArXiv.
[18] Holger Ulmer, et al. Ensemble Methods as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2017, ArXiv.
[19] Takashi Matsubara, et al. RICAP: Random Image Cropping and Patching Data Augmentation for Deep CNNs, 2018, ACML.
[20] Dan Boneh, et al. Ensemble Adversarial Training: Attacks and Defenses, 2017, ICLR.
[21] Pushmeet Kohli, et al. Strength in Numbers: Trading-off Robustness and Computation via Adversarially-Trained Ensembles, 2018, ArXiv.
[22] Hongyi Zhang, et al. mixup: Beyond Empirical Risk Minimization, 2017, ICLR.
[23] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[24] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[25] Aleksander Madry, et al. Adversarially Robust Generalization Requires More Data, 2018, NeurIPS.
[26] Logan Engstrom, et al. Synthesizing Robust Adversarial Examples, 2017, ICML.
[27] Jun Zhu, et al. Boosting Adversarial Attacks with Momentum, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[28] Matthias Hein, et al. Logit Pairing Methods Can Fool Gradient-Based Attacks, 2018, ArXiv.
[29] Andrew Gordon Wilson, et al. Averaging Weights Leads to Wider Optima and Better Generalization, 2018, UAI.
[30] Alexei A. Efros, et al. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[31] Pushmeet Kohli, et al. Adversarial Risk and the Dangers of Evaluating Against Weak Attacks, 2018, ICML.
[32] Andrew Gordon Wilson, et al. Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs, 2018, NeurIPS.
[33] Quoc V. Le, et al. AutoAugment: Learning Augmentation Policies from Data, 2018, ArXiv.
[34] Po-Sen Huang, et al. An Alternative Surrogate Loss for PGD-based Adversarial Testing, 2019, ArXiv.
[35] Jeff Donahue, et al. Large Scale GAN Training for High Fidelity Natural Image Synthesis, 2018, ICLR.
[36] Amir Najafi, et al. Robustness to Adversarial Perturbations in Learning from Incomplete Data, 2019, NeurIPS.
[37] Alan L. Yuille, et al. Feature Denoising for Improving Adversarial Robustness, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[38] Thomas G. Dietterich, et al. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations, 2018, ICLR.
[39] Ning Chen, et al. Improving Adversarial Robustness via Promoting Ensemble Diversity, 2019, ICML.
[40] Michael I. Jordan, et al. Theoretically Principled Trade-off between Robustness and Accuracy, 2019, ICML.
[41] Po-Sen Huang, et al. Are Labels Required for Improving Adversarial Robustness?, 2019, NeurIPS.
[42] Kimin Lee, et al. Using Pre-Training Can Improve Model Robustness and Uncertainty, 2019, ICML.
[43] Ludwig Schmidt, et al. Unlabeled Data Improves Adversarial Robustness, 2019, NeurIPS.
[44] Suman V. Ravuri, et al. Classification Accuracy Score for Conditional Generative Models, 2019, NeurIPS.
[45] Pushmeet Kohli, et al. Adversarial Robustness through Local Linearization, 2019, NeurIPS.
[46] Seong Joon Oh, et al. CutMix: Regularization Strategy to Train Strong Classifiers With Localizable Features, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[47] Matthias Hein, et al. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks, 2020, ICML.
[48] Timothy A. Mann, et al. Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples, 2020, ArXiv.
[49] Pieter Abbeel, et al. Denoising Diffusion Probabilistic Models, 2020, NeurIPS.
[50] Nicolas Flammarion, et al. RobustBench: a standardized adversarial robustness benchmark, 2020, NeurIPS Datasets and Benchmarks.
[51] J. Z. Kolter, et al. Overfitting in adversarially robust deep learning, 2020, ICML.
[52] Yisen Wang, et al. Adversarial Weight Perturbation Helps Robust Generalization, 2020, NeurIPS.
[53] Chao Zhang, et al. Self-Adaptive Training: beyond Empirical Risk Minimization, 2020, NeurIPS.
[54] Jinwoo Shin, et al. Learning to Generate Noise for Robustness against Multiple Perturbations, 2020, ArXiv.
[55] Quoc V. Le, et al. Randaugment: Practical automated data augmentation with a reduced search space, 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[56] Céline Hudelot, et al. Controlling generative models with continuous factors of variations, 2020, ICLR.
[57] Nicolas Flammarion, et al. Square Attack: a query-efficient black-box adversarial attack via random search, 2019, ECCV.
[58] Matthias Hein, et al. Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack, 2019, ICML.
[59] Phillip Isola, et al. On the "steerability" of generative adversarial networks, 2019, ICLR.
[60] Hang Su, et al. Boosting Adversarial Training with Hypersphere Embedding, 2020, NeurIPS.
[61] Erik Härkönen, et al. GANSpace: Discovering Interpretable GAN Controls, 2020, NeurIPS.
[62] Timothy A. Mann, et al. Achieving Robustness in the Wild via Adversarial Mixing With Disentangled Representations, 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[63] D. Song, et al. The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization, 2020, 2021 IEEE/CVF International Conference on Computer Vision (ICCV).
[64] Rewon Child, et al. Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images, 2020, ICLR.
[65] Jiaya Jia, et al. Learnable Boundary Guided Adversarial Training, 2020, 2021 IEEE/CVF International Conference on Computer Vision (ICCV).
[66] J. Zico Kolter, et al. Learning perturbation sets for robust machine learning, 2020, ICLR.
[67] Sven Gowal, et al. Improving Robustness using Generated Data, 2021, NeurIPS.
[68] Shiyu Chang, et al. Robust Overfitting may be mitigated by properly learned smoothening, 2021, ICLR.
[69] Sven Gowal, et al. Data Augmentation Can Improve Robustness, 2021, NeurIPS.