[1] Nicholas Carlini,et al. Unrestricted Adversarial Examples , 2018, ArXiv.
[2] Pin-Yu Chen,et al. Attacking the Madry Defense Model with L1-based Adversarial Examples , 2017, ICLR.
[3] Aditi Raghunathan,et al. Certified Defenses against Adversarial Examples , 2018, ICLR.
[4] Ananthram Swami,et al. The Limitations of Deep Learning in Adversarial Settings , 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[5] Joan Bruna,et al. Intriguing properties of neural networks , 2013, ICLR.
[6] Leon A. Gatys,et al. Understanding Low- and High-Level Contributions to Fixation Prediction , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[7] Atul Prakash,et al. Robust Physical-World Attacks on Deep Learning Visual Classification , 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[8] Aleksander Madry,et al. Noise or Signal: The Role of Image Backgrounds in Object Recognition , 2020, ICLR.
[9] Pietro Perona,et al. Microsoft COCO: Common Objects in Context , 2014, ECCV.
[10] A. Treisman,et al. A feature-integration theory of attention , 1980, Cognitive Psychology.
[11] Tal Grinshpoun,et al. Heat and Blur: An Effective and Fast Defense Against Adversarial Examples , 2020, ArXiv.
[12] Seyed-Mohsen Moosavi-Dezfooli,et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[13] Ananthram Swami,et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks , 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[14] Lujo Bauer,et al. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition , 2016, CCS.
[15] Honglak Lee,et al. An Analysis of Single-Layer Networks in Unsupervised Feature Learning , 2011, AISTATS.
[16] Jimmy Ba,et al. Adam: A Method for Stochastic Optimization , 2014, ICLR.
[17] David A. Wagner,et al. Towards Evaluating the Robustness of Neural Networks , 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[18] Aleksander Madry,et al. Towards Deep Learning Models Resistant to Adversarial Attacks , 2017, ICLR.
[19] Li Fei-Fei,et al. ImageNet: A large-scale hierarchical image database , 2009, CVPR.
[20] Yizheng Chen,et al. MixTrain: Scalable Training of Verifiably Robust Neural Networks , 2018, ArXiv.
[21] Jonathon Shlens,et al. Explaining and Harnessing Adversarial Examples , 2014, ICLR.
[22] J. Zico Kolter,et al. Certified Adversarial Robustness via Randomized Smoothing , 2019, ICML.
[23] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[24] Luca Daniel,et al. Towards Verifying Robustness of Neural Networks Against Semantic Perturbations , 2019, ArXiv.
[25] Suman Jana,et al. Certified Robustness to Adversarial Examples with Differential Privacy , 2018, 2019 IEEE Symposium on Security and Privacy (SP).
[26] Philip H. S. Torr,et al. On the Robustness of Semantic Segmentation Models to Adversarial Attacks , 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[27] Bo Li,et al. Big but Imperceptible Adversarial Perturbations via Semantic Manipulation , 2019, ArXiv.
[28] Atul Prakash,et al. Can Attention Masks Improve Adversarial Robustness? , 2019, ArXiv.
[29] Ali Borji,et al. Salient Object Detection: A Benchmark , 2015, IEEE Transactions on Image Processing.
[30] Ali Farhadi,et al. YOLOv3: An Incremental Improvement , 2018, ArXiv.
[31] Aleksander Madry,et al. Robustness May Be at Odds with Accuracy , 2018, ICLR.
[32] Nicolas Pugeault,et al. Salient Region Segmentation , 2018, ArXiv.
[33] Alan L. Yuille,et al. Adversarial Examples for Semantic Segmentation and Object Detection , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[34] Trevor Darrell,et al. Fully Convolutional Networks for Semantic Segmentation , 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.