Jun Zhu | Yinpeng Dong | Zhijie Deng | Tianyu Pang | Hang Su | Ke Xu | Xiao Yang
[1] Matthias Hein, et al. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks, 2020, ICML.
[2] Ilya P. Razenshteyn, et al. Adversarial examples from computational constraints, 2018, ICML.
[3] Ibrahim M. Alabdulmohsin, et al. What Do Neural Networks Learn When Trained With Random Labels?, 2020, NeurIPS.
[4] Hang Su, et al. Bag of Tricks for Adversarial Training, 2020, ICLR.
[5] Michael I. Jordan, et al. Theoretically Principled Trade-off between Robustness and Accuracy, 2019, ICML.
[6] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[7] Li Fei-Fei, et al. MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels, 2017, ICML.
[8] Moustapha Cissé, et al. Parseval Networks: Improving Robustness to Adversarial Examples, 2017, ICML.
[9] Alan L. Yuille, et al. Feature Denoising for Improving Adversarial Robustness, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[10] Ryota Tomioka, et al. Norm-Based Capacity Control in Neural Networks, 2015, COLT.
[11] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[12] Ning Chen, et al. Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness, 2019, ICLR.
[13] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011, NIPS Workshop on Deep Learning and Unsupervised Feature Learning.
[14] Kilian Q. Weinberger, et al. Densely Connected Convolutional Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[15] M. Kendall. A New Measure of Rank Correlation, 1938, Biometrika.
[16] Hossein Mobahi, et al. Fantastic Generalization Measures and Where to Find Them, 2019, ICLR.
[17] Jun Zhu, et al. Adversarial Distributional Training for Robust Deep Learning, 2020, NeurIPS.
[18] Nagarajan Natarajan, et al. Learning with Noisy Labels, 2013, NIPS.
[19] J. Zico Kolter, et al. Certified Adversarial Robustness via Randomized Smoothing, 2019, ICML.
[20] Yisen Wang, et al. Adversarial Weight Perturbation Helps Robust Generalization, 2020, NeurIPS.
[21] Timo Aila, et al. Temporal Ensembling for Semi-Supervised Learning, 2016, ICLR.
[22] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[23] Di He, et al. Adversarially Robust Generalization Just Requires More Unlabeled Data, 2019, arXiv.
[24] Richard Nock, et al. Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[25] Cho-Jui Hsieh, et al. Convergence of Adversarial Training in Overparametrized Neural Networks, 2019, NeurIPS.
[26] Hongyang R. Zhang, et al. Self-Adaptive Training: beyond Empirical Risk Minimization, 2020, NeurIPS.
[27] Kimin Lee, et al. Using Pre-Training Can Improve Model Robustness and Uncertainty, 2019, ICML.
[28] James Bailey, et al. On the Convergence and Robustness of Adversarial Training, 2021, ICML.
[29] Ludwig Schmidt, et al. Unlabeled Data Improves Adversarial Robustness, 2019, NeurIPS.
[30] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[31] Mert R. Sabuncu, et al. Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels, 2018, NeurIPS.
[32] Zhao Song, et al. Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality, 2020, NeurIPS.
[33] Pin-Yu Chen, et al. CAT: Customized Adversarial Training for Improved Robustness, 2020, IJCAI.
[34] Vitaly Feldman, et al. What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation, 2020, NeurIPS.
[35] Bernt Schiele, et al. Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks, 2019, ICML.
[36] Kannan Ramchandran, et al. Rademacher Complexity for Adversarially Robust Generalization, 2018, ICML.
[37] Yuanzhi Li, et al. A Convergence Theory for Deep Learning via Over-Parameterization, 2018, ICML.
[38] Liwei Wang, et al. Gradient Descent Finds Global Minima of Deep Neural Networks, 2018, ICML.
[39] Xingrui Yu, et al. Co-teaching: Robust training of deep neural networks with extremely noisy labels, 2018, NeurIPS.
[40] James Bailey, et al. Improving Adversarial Robustness Requires Revisiting Misclassified Examples, 2020, ICLR.
[41] Po-Sen Huang, et al. Are Labels Required for Improving Adversarial Robustness?, 2019, NeurIPS.
[42] Yi Zhang, et al. Stronger generalization bounds for deep nets via a compression approach, 2018, ICML.
[43] J. Zico Kolter, et al. Fast is better than free: Revisiting adversarial training, 2020, ICLR.
[44] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[45] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[46] Y. Nesterov. A method for solving the convex programming problem with convergence rate O(1/k^2), 1983.
[47] Larry S. Davis, et al. Adversarial Training for Free!, 2019, NeurIPS.
[48] Pushmeet Kohli, et al. Adversarial Robustness through Local Linearization, 2019, NeurIPS.
[49] Baishakhi Ray, et al. Metric Learning for Adversarial Robustness, 2019, NeurIPS.
[50] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[51] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[52] Matthias Hein, et al. Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack, 2019, ICML.
[53] Timothy A. Mann, et al. Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples, 2020, arXiv.
[54] Varun Kanade, et al. How benign is benign overfitting?, 2020, ICLR.
[55] Xiaolin Hu, et al. Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[56] Bin Yang, et al. Learning to Reweight Examples for Robust Deep Learning, 2018, ICML.
[57] Trevor Darrell, et al. Deep Layer Aggregation, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[58] Jascha Sohl-Dickstein, et al. Sensitivity and Generalization in Neural Networks: an Empirical Study, 2018, ICLR.
[59] J. Zico Kolter, et al. Provable defenses against adversarial examples via the convex outer adversarial polytope, 2017, ICML.
[60] Matus Telgarsky, et al. Spectrally-normalized margin bounds for neural networks, 2017, NIPS.
[61] Aleksander Madry, et al. Adversarially Robust Generalization Requires More Data, 2018, NeurIPS.
[62] Samy Bengio, et al. Understanding deep learning requires rethinking generalization, 2016, ICLR.
[63] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[64] Seyed-Mohsen Moosavi-Dezfooli, et al. Robustness via Curvature Regularization, and Vice Versa, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[65] Mikhail Belkin, et al. Reconciling modern machine-learning practice and the classical bias–variance trade-off, 2018, Proceedings of the National Academy of Sciences.
[66] Alan Yuille, et al. Intriguing properties of adversarial training, 2019, ICLR.
[67] Nathan Srebro, et al. Exploring Generalization in Deep Learning, 2017, NIPS.
[68] Ning Chen, et al. Improving Adversarial Robustness via Promoting Ensemble Diversity, 2019, ICML.
[69] Yoshua Bengio, et al. A Closer Look at Memorization in Deep Networks, 2017, ICML.
[70] Nicolas Flammarion, et al. Square Attack: a query-efficient black-box adversarial attack via random search, 2020, ECCV.
[71] Dacheng Tao, et al. Theoretical Analysis of Adversarial Learning: A Minimax Approach, 2018, NeurIPS.
[72] Bin Dong, et al. You Only Propagate Once: Painless Adversarial Training Using Maximal Principle, 2019, NeurIPS.
[73] J. Zico Kolter, et al. Overfitting in adversarially robust deep learning, 2020, ICML.
[74] Vitaly Feldman, et al. Does learning require memorization? a short tale about a long tail, 2019, STOC.
[75] Hao Li, et al. Visualizing the Loss Landscape of Neural Nets, 2017, NeurIPS.
[76] Shiyu Chang, et al. Robust Overfitting may be mitigated by properly learned smoothening, 2021, ICLR.
[77] Tao Lin, et al. On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them, 2020, NeurIPS.
[78] Hang Su, et al. Benchmarking Adversarial Robustness on Image Classification, 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[79] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.