Effect of Adversarial Examples on the Robustness of CAPTCHA
Yang Zhang | Xin Zhou | Haichang Gao | Ge Pei | Shuai Kang
[1] John C. Mitchell, et al. Text-based CAPTCHA strengths and weaknesses, 2011, CCS '11.
[2] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[3] Patrick D. McDaniel, et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, 2016, ArXiv.
[4] Julio Hernandez-Castro, et al. No Bot Expects the DeepCAPTCHA! Introducing Immutable Adversarial Examples, With Applications to CAPTCHA Generation, 2017, IEEE Transactions on Information Forensics and Security.
[5] John Langford, et al. CAPTCHA: Using Hard AI Problems for Security, 2003, EUROCRYPT.
[6] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[7] Jeff Yan, et al. A low-cost attack on a Microsoft CAPTCHA, 2008, CCS.
[8] Seyed-Mohsen Moosavi-Dezfooli, et al. Universal Adversarial Perturbations, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[9] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[10] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[11] Sergey Ioffe, et al. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning, 2016, AAAI.
[12] Ping Zhang, et al. A Simple Generic Attack on Text Captchas, 2016, NDSS.
[13] Kouichi Sakurai, et al. One Pixel Attack for Fooling Deep Neural Networks, 2017, IEEE Transactions on Evolutionary Computation.
[14] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[15] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[16] Ian S. Fischer, et al. Adversarial Transformation Networks: Learning to Generate Adversarial Examples, 2017, ArXiv.
[17] Lei Lei, et al. Robustness of text-based completely automated public Turing test to tell computers and humans apart, 2016, IET Inf. Secur.
[18] Dawn Xiaodong Song, et al. Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong, 2017, ArXiv.
[19] Luca Rigazio, et al. Towards Deep Neural Network Architectures Robust to Adversarial Examples, 2014, ICLR.
[20] John Langford, et al. Telling humans and computers apart automatically, 2004, CACM.
[21] J. Doug Tygar, et al. Image Recognition CAPTCHAs, 2004, ISC.
[22] Rich Gossweiler, et al. What's Up CAPTCHA? A CAPTCHA Based on Image Orientation, 2009, WWW '09.
[23] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[24] Yi Liu, et al. Research on the Security of Microsoft's Two-Layer Captcha, 2017, IEEE Transactions on Information Forensics and Security.
[25] Dumitru Erhan, et al. Going deeper with convolutions, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[26] Jason Yosinski, et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[27] Kaiming He, et al. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, 2015, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[28] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[29] Eduardo Valle, et al. Exploring the space of adversarial images, 2015, 2016 International Joint Conference on Neural Networks (IJCNN).
[30] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[31] David A. Forsyth, et al. NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles, 2017, ArXiv.
[32] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[33] Surya Ganguli, et al. Biologically inspired protection of deep networks from adversarial attacks, 2017, ArXiv.
[34] Dawn Song, et al. Robust Physical-World Attacks on Deep Learning Models, 2017, ArXiv.
[35] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[36] Yuan Yu, et al. TensorFlow: A system for large-scale machine learning, 2016, OSDI.
[37] Rama Chellappa, et al. UPSET and ANGRI: Breaking High Performance Image Classifiers, 2017, ArXiv.
[38] Alan L. Yuille, et al. Adversarial Examples for Semantic Segmentation and Object Detection, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).