Yuan He | Xiaodan Li | Yuefeng Chen | Hui Xue
[1] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[2] Cyrus Rashtchian, et al. Robustness for Non-Parametric Classification: A Generic Attack and Defense, 2020, AISTATS.
[3] Wei Liu, et al. SSD: Single Shot MultiBox Detector, 2015, ECCV.
[4] David A. Wagner, et al. On the Robustness of Deep K-Nearest Neighbors, 2019, 2019 IEEE Security and Privacy Workshops (SPW).
[5] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[6] Patrick D. McDaniel, et al. Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning, 2018, ArXiv.
[7] Huichen Li. Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models, 2017.
[8] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[9] Quoc V. Le, et al. Sequence to Sequence Learning with Neural Networks, 2014, NIPS.
[10] Leif E. Peterson. K-nearest neighbor, 2009, Scholarpedia.
[11] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[12] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[13] Chawin Sitawarin, et al. Defending Against Adversarial Examples with K-Nearest Neighbor, 2019, ArXiv.
[14] Abhimanyu Dubey, et al. Defense Against Adversarial Images Using Web-Scale Nearest-Neighbor Search, 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[15] Samy Bengio, et al. Adversarial Machine Learning at Scale, 2016, ICLR.
[16] Jun Zhu, et al. Boosting Adversarial Attacks with Momentum, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[17] Roland Vollgraf, et al. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017, ArXiv.
[18] Moustapha Cissé, et al. Countering Adversarial Images using Input Transformations, 2018, ICLR.
[19] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[20] Kaiming He, et al. Mask R-CNN, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[21] Brijesh B. Mehta, et al. Face Recognition Methods & Applications, 2014, ArXiv.
[22] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[23] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[24] Vineeth N. Balasubramanian, et al. Harnessing the Vulnerability of Latent Layers in Adversarially Trained Models, 2019, IJCAI.
[25] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[26] Xiaoyu Cao, et al. Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification, 2017, ACSAC.
[27] Jun Zhu, et al. Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks, 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[28] Patrick D. McDaniel, et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, 2016, ArXiv.