Smoothness Analysis of Adversarial Training
Yasutoshi Ida, Sekitoshi Kanai, Masanori Yamada, Yuki Yamanaka, Hiroshi Takahashi
[1] Woojin Lee, et al. Understanding Catastrophic Overfitting in Single-step Adversarial Training, 2020, AAAI.
[2] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[3] Matthias Hein, et al. Reliable Evaluation of Adversarial Robustness with an Ensemble of Diverse Parameter-free Attacks, 2020, ICML.
[4] Razvan Pascanu, et al. Sharp Minima Can Generalize for Deep Nets, 2017, ICML.
[5] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[6] Eric Jones, et al. SciPy: Open Source Scientific Tools for Python, 2001.
[7] Tao Lin, et al. On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them, 2020, NeurIPS.
[8] Yoshua Bengio, et al. Three Factors Influencing Minima in SGD, 2017, arXiv.
[9] Ludwig Schmidt, et al. Unlabeled Data Improves Adversarial Robustness, 2019, NeurIPS.
[10] Stefano Soatto, et al. Entropy-SGD: Biasing Gradient Descent into Wide Valleys, 2016, ICLR.
[11] J. Zico Kolter, et al. Certified Adversarial Robustness via Randomized Smoothing, 2019, ICML.
[12] Samy Bengio, et al. Adversarial Machine Learning at Scale, 2016, ICLR.
[13] Richard Socher, et al. Improving Generalization Performance by Switching from Adam to SGD, 2017, arXiv.
[14] Jorge Nocedal, et al. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima, 2016, ICLR.
[15] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[16] Hao Li, et al. Visualizing the Loss Landscape of Neural Nets, 2017, NeurIPS.
[17] Hossein Mobahi, et al. Sharpness-Aware Minimization for Efficiently Improving Generalization, 2020, arXiv.
[18] Moustapha Cissé, et al. Parseval Networks: Improving Robustness to Adversarial Examples, 2017, ICML.
[19] Yoram Singer, et al. Train Faster, Generalize Better: Stability of Stochastic Gradient Descent, 2015, ICML.
[20] K. Schittkowski, et al. Nonlinear Programming, 2022.
[21] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[22] Logan Engstrom, et al. Evaluating and Understanding the Robustness of Adversarial Logit Pairing, 2018, arXiv.
[23] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, CVPR.
[24] Nathan Srebro, et al. Exploring Generalization in Deep Learning, 2017, NIPS.