Tomoharu Iwata | Atsutoshi Kumagai | Sekitoshi Kanai | Masanori Yamada | Yuki Yamanaka | Hiroshi Takahashi | Tomokatsu Takahashi
[1] Matthias Hein,et al. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks , 2020, ICML.
[2] Stefano Soatto,et al. Entropy-SGD: biasing gradient descent into wide valleys , 2016, ICLR.
[3] Aleksander Madry,et al. Towards Deep Learning Models Resistant to Adversarial Attacks , 2017, ICLR.
[4] Jonathon Shlens,et al. Explaining and Harnessing Adversarial Examples , 2014, ICLR.
[5] Ming-Wei Chang,et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding , 2019, NAACL.
[6] Michael I. Jordan,et al. Theoretically Principled Trade-off between Robustness and Accuracy , 2019, ICML.
[7] Yi Wang,et al. Model-Agnostic Adversarial Detection by Random Perturbations , 2019, IJCAI.
[8] Hossein Mobahi,et al. Sharpness-Aware Minimization for Efficiently Improving Generalization , 2020, arXiv.
[9] Zhiheng Huang,et al. Residual Convolutional CTC Networks for Automatic Speech Recognition , 2017, arXiv.
[10] Hossein Mobahi,et al. Fantastic Generalization Measures and Where to Find Them , 2019, ICLR.
[11] Jan Hendrik Metzen,et al. On Detecting Adversarial Perturbations , 2017, ICLR.
[13] J. Zico Kolter,et al. Overfitting in adversarially robust deep learning , 2020, ICML.
[14] Tao Yu,et al. A New Defense Against Adversarial Images: Turning a Weakness into a Strength , 2019, NeurIPS.
[15] Yisen Wang,et al. Adversarial Weight Perturbation Helps Robust Generalization , 2020, NeurIPS.
[16] Samy Bengio,et al. Adversarial Machine Learning at Scale , 2016, ICLR.
[17] Joan Bruna,et al. Intriguing properties of neural networks , 2013, ICLR.
[18] Hao Li,et al. Visualizing the Loss Landscape of Neural Nets , 2017, NeurIPS.
[19] Tao Lin,et al. On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them , 2020, NeurIPS.
[20] Vinay Uday Prabhu,et al. Understanding Adversarial Robustness Through Loss Landscape Geometries , 2019, arXiv.
[21] Yoshua Bengio,et al. Gradient-based learning applied to document recognition , 1998, Proc. IEEE.
[22] Ananthram Swami,et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks , 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[23] Jorge Nocedal,et al. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima , 2016, ICLR.
[24] Dongdong Hou,et al. Detection Based Defense Against Adversarial Examples From the Steganalysis Point of View , 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[25] Greg Yang,et al. Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers , 2019, NeurIPS.
[26] James Bailey,et al. Improving Adversarial Robustness Requires Revisiting Misclassified Examples , 2020, ICLR.
[27] Razvan Pascanu,et al. Sharp Minima Can Generalize For Deep Nets , 2017, ICML.
[29] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[30] Andrew Y. Ng,et al. Reading Digits in Natural Images with Unsupervised Feature Learning , 2011 .