On the Detection of Adversarial Attacks against Deep Neural Networks
[1] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[2] Luc Van Gool, et al. Multi-view traffic sign detection, recognition, and 3D localisation, 2014, Machine Vision and Applications.
[3] Geoffrey E. Hinton, et al. Rectified Linear Units Improve Restricted Boltzmann Machines, 2010, ICML.
[4] Quanyan Zhu, et al. A game-theoretic defense against data poisoning attacks in distributed support vector machines, 2017, IEEE 56th Annual Conference on Decision and Control (CDC).
[5] Rui Zhang, et al. Secure and resilient distributed machine learning under adversarial environments, 2015, 18th International Conference on Information Fusion (Fusion).
[6] Rui Zhang, et al. A game-theoretic analysis of label flipping attacks on distributed support vector machines, 2017, 51st Annual Conference on Information Sciences and Systems (CISS).
[7] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[8] Burr Settles, et al. Active Learning Literature Survey, 2009.
[9] Heike Freud, et al. On-Line Learning in Neural Networks, 2016.
[10] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[11] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[12] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2016, IEEE European Symposium on Security and Privacy (EuroS&P).
[13] Eldad Haber, et al. Stable architectures for deep neural networks, 2017, arXiv.