Learning to Defend by Learning to Attack
Tuo Zhao | Zhehui Chen | Haoming Jiang | Yuyang Shi | Bo Dai
[1] Patrice Marcotte, et al. An overview of bilevel optimization, 2007, Ann. Oper. Res.
[2] Hongyi Zhang, et al. mixup: Beyond Empirical Risk Minimization, 2017, ICLR.
[3] Sanjay Mehrotra, et al. Distributionally Robust Optimization: A Review, 2019, ArXiv.
[4] Tong Zhang, et al. NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks, 2019, ICML.
[5] Ming Yang, et al. DeepFace: Closing the Gap to Human-Level Performance in Face Verification, 2014, 2014 IEEE Conference on Computer Vision and Pattern Recognition.
[6] Sepp Hochreiter, et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, 2017, NIPS.
[7] Jian Sun, et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[8] Trevor Darrell, et al. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation, 2013, 2014 IEEE Conference on Computer Vision and Pattern Recognition.
[9] Wojciech Zaremba, et al. OpenAI Gym, 2016, ArXiv.
[10] Luca Antiga, et al. Automatic differentiation in PyTorch, 2017.
[11] Mingyan Liu, et al. Generating Adversarial Examples with Adversarial Networks, 2018, IJCAI.
[12] Stefano Ermon, et al. Generative Adversarial Imitation Learning, 2016, NIPS.
[13] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[14] Sergey Levine, et al. Trust Region Policy Optimization, 2015, ICML.
[15] Eduardo Valle, et al. Exploring the space of adversarial images, 2015, 2016 International Joint Conference on Neural Networks (IJCNN).
[16] Le Song, et al. Deep Hyperspherical Learning, 2017, NIPS.
[17] Samy Bengio, et al. Adversarial Machine Learning at Scale, 2016, ICLR.
[18] Ian S. Fischer, et al. Adversarial Transformation Networks: Learning to Generate Adversarial Examples, 2017, ArXiv.
[19] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[20] Michael I. Jordan, et al. Theoretically Principled Trade-off between Robustness and Accuracy, 2019, ICML.
[21] Tuo Zhao, et al. Toward Deeper Understanding of Nonconvex Stochastic Optimization with Momentum using Diffusion Approximations, 2018, ArXiv.
[22] J. Schmidhuber, et al. A neural network that embeds its own meta-levels, 1993, IEEE International Conference on Neural Networks.
[23] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[24] Jason Yosinski, et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[25] Wei Xu, et al. Adversarial Interpolation Training: A Simple Approach for Improving Model Robustness, 2019.
[26] Le Song, et al. Learning from Conditional Distributions via Dual Embeddings, 2016, AISTATS.
[27] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[28] Jun Zhu, et al. Boosting Adversarial Attacks with Momentum, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[29] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[30] Yuichi Yoshida, et al. Spectral Normalization for Generative Adversarial Networks, 2018, ICLR.
[31] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[32] Sepp Hochreiter, et al. Meta-learning with backpropagation, 2001, IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222).
[33] Rama Chellappa, et al. Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models, 2018, ICLR.
[34] Jürgen Schmidhuber, et al. Learning to Control Fast-Weight Memories: An Alternative to Dynamic Recurrent Networks, 1992, Neural Computation.
[35] Aleksander Madry, et al. On Evaluating Adversarial Robustness, 2019, ArXiv.
[36] Dawn Xiaodong Song, et al. Delving into Transferable Adversarial Examples and Black-box Attacks, 2016, ICLR.
[37] Yang Song, et al. Improving the Robustness of Deep Neural Networks via Stability Training, 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[38] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[39] Sergey Levine, et al. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, 2017, ICML.
[40] Marcin Andrychowicz, et al. Learning to learn by gradient descent by gradient descent, 2016, NIPS.
[41] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[42] Sepp Hochreiter, et al. Learning to Learn Using Gradient Descent, 2001, ICANN.
[43] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[44] A. Kleywegt, et al. Distributionally Robust Stochastic Optimization with Wasserstein Distance, 2016, Math. Oper. Res.
[45] Anders Krogh, et al. A Simple Weight Decay Can Improve Generalization, 1991, NIPS.