Hang Su | Yinpeng Dong | Zhijie Deng | Jun Zhu | Tianyu Pang
[1] Bo Dai,et al. Learning to Defense by Learning to Attack , 2018, DGS@ICLR.
[2] Mingyan Liu,et al. Generating Adversarial Examples with Adversarial Networks , 2018, IJCAI.
[3] Jun Zhu,et al. Kernel Implicit Variational Inference , 2017, ICLR.
[4] Greg Yang,et al. Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers , 2019, NeurIPS.
[5] Chun-Nam Yu,et al. A Direct Approach to Robust Deep Learning Using Adversarial Networks , 2019, ICLR.
[6] Alan L. Yuille,et al. Feature Denoising for Improving Adversarial Robustness , 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[7] Dan Boneh,et al. Ensemble Adversarial Training: Attacks and Defenses , 2017, ICLR.
[8] Seong Joon Oh,et al. Adversarial Image Perturbation for Privacy Protection A Game Theory Perspective , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[9] Julien Cornebise,et al. Weight Uncertainty in Neural Networks , 2015, ICML.
[10] Aleksander Madry,et al. Exploring the Landscape of Spatial Robustness , 2017, ICML.
[11] Ning Chen,et al. Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness , 2019, ICLR.
[12] J. Zico Kolter,et al. Overfitting in adversarially robust deep learning , 2020, ICML.
[13] Guigang Zhang,et al. Deep Learning , 2016, Int. J. Semantic Comput..
[14] Jun Zhu,et al. Towards Robust Detection of Adversarial Examples , 2017, NeurIPS.
[15] Jimmy Ba,et al. Adam: A Method for Stochastic Optimization , 2014, ICLR.
[16] Daniel Kuhn,et al. Data-driven distributionally robust optimization using the Wasserstein metric: performance guarantees and tractable reformulations , 2015, Mathematical Programming.
[17] Alex Krizhevsky,et al. Learning Multiple Layers of Features from Tiny Images , 2009 .
[18] Aleksander Madry,et al. On Evaluating Adversarial Robustness , 2019, ArXiv.
[19] Geoffrey E. Hinton,et al. Speech recognition with deep recurrent neural networks , 2013, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing.
[20] I. Jolliffe. Principal Components in Regression Analysis , 1986 .
[21] Kimin Lee,et al. Using Pre-Training Can Improve Model Robustness and Uncertainty , 2019, ICML.
[22] Aleksander Madry,et al. On Adaptive Attacks to Adversarial Example Defenses , 2020, NeurIPS.
[23] Nic Ford,et al. Adversarial Examples Are a Natural Consequence of Test Error in Noise , 2019, ICML.
[24] Nikos Komodakis,et al. Wide Residual Networks , 2016, BMVC.
[25] Seunghoon Hong,et al. Adversarial Defense via Learning to Generate Diverse Attacks , 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[26] Aleksander Madry,et al. Towards Deep Learning Models Resistant to Adversarial Attacks , 2017, ICLR.
[27] Ning Chen,et al. Improving Adversarial Robustness via Promoting Ensemble Diversity , 2019, ICML.
[28] Hang Su,et al. Towards Privacy Protection by Generating Adversarial Identity Masks , 2020, ArXiv.
[29] Joan Bruna,et al. Intriguing properties of neural networks , 2013, ICLR.
[30] Ludwig Schmidt,et al. Unlabeled Data Improves Adversarial Robustness , 2019, NeurIPS.
[31] Aleksander Madry,et al. Robustness May Be at Odds with Accuracy , 2018, ICLR.
[32] M. Staib,et al. Distributionally Robust Deep Learning as a Generalization of Adversarial Training , 2017 .
[33] Jonathon Shlens,et al. Explaining and Harnessing Adversarial Examples , 2014, ICLR.
[34] Pushmeet Kohli,et al. Adversarial Risk and the Dangers of Evaluating Against Weak Attacks , 2018, ICML.
[35] Ian S. Fischer,et al. Adversarial Transformation Networks: Learning to Generate Adversarial Examples , 2017, ArXiv.
[36] J. Zico Kolter,et al. Certified Adversarial Robustness via Randomized Smoothing , 2019, ICML.
[37] Bin Dong,et al. You Only Propagate Once: Painless Adversarial Training Using Maximal Principle , 2019 .
[38] Soumith Chintala,et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks , 2015, ICLR.
[39] Ming-Wei Chang,et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding , 2019, NAACL.
[40] Alexei A. Efros,et al. Image-to-Image Translation with Conditional Adversarial Networks , 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[41] Max Welling,et al. Structured and Efficient Variational Deep Learning with Matrix Gaussian Posteriors , 2016, ICML.
[42] Logan Engstrom,et al. Black-box Adversarial Attacks with Limited Queries and Information , 2018, ICML.
[43] David A. Wagner,et al. Towards Evaluating the Robustness of Neural Networks , 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[44] Max Welling,et al. Auto-Encoding Variational Bayes , 2013, ICLR.
[45] Stefano Ermon,et al. Output Diversified Initialization for Adversarial Attacks , 2020, ArXiv.
[46] John C. Duchi,et al. Certifying Some Distributional Robustness with Principled Adversarial Training , 2017, ICLR.
[47] Po-Sen Huang,et al. Are Labels Required for Improving Adversarial Robustness? , 2019, NeurIPS.
[48] Kun He,et al. Improving the Generalization of Adversarial Training with Domain Adaptation , 2018, ICLR.
[49] Andrew Y. Ng,et al. Reading Digits in Natural Images with Unsupervised Feature Learning , 2011 .
[50] Tong Zhang,et al. NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks , 2019, ICML.
[51] Harini Kannan,et al. Adversarial Logit Pairing , 2018, ArXiv.
[52] Anja De Waegenaere,et al. Robust Solutions of Optimization Problems Affected by Uncertain Probabilities , 2011, Manag. Sci..
[53] Sergey Ioffe,et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift , 2015, ICML.
[54] Peilin Zhong,et al. Enhancing Adversarial Defense by k-Winners-Take-All , 2020, ICLR.
[55] Xiaolin Hu,et al. Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser , 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[56] Jun Zhu,et al. A Spectral Approach to Gradient Estimation for Implicit Distributions , 2018, ICML.
[57] Samy Bengio,et al. Adversarial examples in the physical world , 2016, ICLR.
[58] Li Fei-Fei,et al. Perceptual Losses for Real-Time Style Transfer and Super-Resolution , 2016, ECCV.
[59] J. Zico Kolter,et al. Fast is better than free: Revisiting adversarial training , 2020, ICLR.
[60] Philip Bachman,et al. Calibrating Energy-based Generative Adversarial Networks , 2017, ICLR.
[61] J. Danskin. The Theory of Max-Min and its Application to Weapons Allocation Problems , 1967 .
[62] Stefano Ermon,et al. Diversity can be Transferred: Output Diversification for White- and Black-box Attacks , 2020, NeurIPS.
[63] Samy Bengio,et al. Adversarial Machine Learning at Scale , 2016, ICLR.
[64] Julien Cornebise,et al. Weight Uncertainty in Neural Networks , 2015, ArXiv.
[65] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[66] Isay Katsman,et al. Generative Adversarial Perturbations , 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[67] Baishakhi Ray,et al. Metric Learning for Adversarial Robustness , 2019, NeurIPS.
[68] Jun Zhu,et al. Boosting Adversarial Attacks with Momentum , 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[69] Fan Yang,et al. Good Semi-supervised Learning That Requires a Bad GAN , 2017, NIPS.
[70] Thomas G. Dietterich,et al. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations , 2018, ICLR.
[71] J. Zico Kolter,et al. Provable defenses against adversarial examples via the convex outer adversarial polytope , 2017, ICML.
[72] Jun-Yan Zhu,et al. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[73] Pushmeet Kohli,et al. Adversarial Robustness through Local Linearization , 2019, NeurIPS.
[74] Geoffrey E. Hinton,et al. ImageNet classification with deep convolutional neural networks , 2012, Commun. ACM.
[75] Sergey Levine,et al. Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor , 2018, ICML.
[76] Jun Zhu,et al. Improving Black-box Adversarial Attacks with a Transfer-based Prior , 2019, NeurIPS.
[77] Di He,et al. Adversarially Robust Generalization Just Requires More Unlabeled Data , 2019, ArXiv.
[78] David A. Wagner,et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples , 2018, ICML.
[79] Dan Boneh,et al. Adversarial Training and Robustness for Multiple Perturbations , 2019, NeurIPS.
[80] Michael I. Jordan,et al. Theoretically Principled Trade-off between Robustness and Accuracy , 2019, ICML.
[81] Larry S. Davis,et al. Adversarial Training for Free! , 2019, NeurIPS.
[82] Yoshua Bengio,et al. Generative Adversarial Nets , 2014, NIPS.
[83] Haichao Zhang,et al. Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training , 2019, NeurIPS.
[84] Hang Su,et al. Benchmarking Adversarial Robustness on Image Classification , 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[85] Ananthram Swami,et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks , 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[86] Max Welling,et al. Multiplicative Normalizing Flows for Variational Bayesian Neural Networks , 2017, ICML.