[1] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks. IEEE Symposium on Security and Privacy (SP), 2017.
[2] Matthias Bethge, et al. A Simple Way to Make Neural Networks Robust Against Diverse Image Corruptions. ECCV, 2020.
[3] Lina J. Karam, et al. A Study and Comparison of Human and Deep Learning Recognition Performance under Visual Distortions. 26th International Conference on Computer Communication and Networks (ICCCN), 2017.
[4] Jonathan Krause, et al. 3D Object Representations for Fine-Grained Categorization. IEEE International Conference on Computer Vision Workshops, 2013.
[5] Jian Sun, et al. Deep Residual Learning for Image Recognition. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[6] Seyed-Mohsen Moosavi-Dezfooli, et al. Geometric Robustness of Deep Networks: Analysis and Improvement. IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018.
[7] Christoph H. Lampert, et al. Zero-Shot Learning—A Comprehensive Evaluation of the Good, the Bad and the Ugly. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
[8] Kun He, et al. Robust Local Features for Improving the Generalization of Adversarial Training. ICLR, 2020.
[9] Bo Zhao, et al. A Large-Scale Attribute Dataset for Zero-Shot Learning. IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2019.
[10] Jinfeng Yi, et al. Is Robustness the Cost of Accuracy? A Comprehensive Study on the Robustness of 18 Deep Image Classification Models. ECCV, 2018.
[11] Michael I. Jordan, et al. Theoretically Principled Trade-off between Robustness and Accuracy. ICML, 2019.
[12] Pietro Perona, et al. Caltech-UCSD Birds 200. 2010.
[13] Aleksander Madry, et al. Robustness May Be at Odds with Accuracy. ICLR, 2019.
[14] Samy Bengio, et al. Adversarial examples in the physical world. ICLR, 2017.
[15] Inderjit S. Dhillon, et al. The Limitations of Adversarial Training and the Blind-Spot Attack. ICLR, 2019.
[16] Thomas G. Dietterich, et al. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations. ICLR, 2019.
[17] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks. ICLR, 2018.
[18] Zhanxing Zhu, et al. Interpreting Adversarially Trained Convolutional Neural Networks. ICML, 2019.
[19] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples. ICLR, 2015.
[20] Aleksander Madry, et al. Exploring the Landscape of Spatial Robustness. ICML, 2019.
[21] Yi Sun, et al. Testing Robustness Against Unforeseen Adversaries. arXiv, 2019.
[22] Nic Ford, et al. Adversarial Examples Are a Natural Consequence of Test Error in Noise. ICML, 2019.
[23] Pascal Frossard, et al. Manitest: Are classifiers really invariant? BMVC, 2015.
[24] Matthias Bethge, et al. Increasing the robustness of DNNs against image corruptions by playing the Game of Noise. arXiv, 2020.
[25] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database. CVPR, 2009.
[26] Alice Caplier, et al. Are Adversarial Robustness and Common Perturbation Robustness Independent Attributes? IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), 2019.