Black-Box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers
Ji Gao | Jack Lanchantin | Mary Lou Soffa | Yanjun Qi
[1] Vladimir I. Levenshtein, et al. Binary codes capable of correcting deletions, insertions, and reversals, 1965.
[2] Geoffrey E. Hinton, et al. Learning representations by back-propagating errors, 1986, Nature.
[3] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[4] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[5] Simon Haykin, et al. Gradient-Based Learning Applied to Document Recognition, 2001.
[6] Pedro M. Domingos, et al. Adversarial classification, 2004, KDD.
[7] Christopher Meek, et al. Good Word Attacks on Statistical Spam Filters, 2005, CEAS.
[8] Christopher Meek, et al. Adversarial learning, 2005, KDD '05.
[9] Blaine Nelson, et al. Can machine learning be secure?, 2006, ASIACCS '06.
[10] Vangelis Metsis, et al. Spam Filtering with Naive Bayes - Which Naive Bayes?, 2006, CEAS.
[11] Jason Weston, et al. A unified architecture for natural language processing: deep neural networks with multitask learning, 2008, ICML '08.
[12] Christopher Potts, et al. Learning Word Vectors for Sentiment Analysis, 2011, ACL.
[13] Andrew Y. Ng, et al. Parsing Natural Scenes and Natural Language with Recursive Neural Networks, 2011, ICML.
[14] Jeffrey Dean, et al. Efficient Estimation of Word Representations in Vector Space, 2013, ICLR.
[15] Christopher Potts, et al. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank, 2013, EMNLP.
[16] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[17] Jeffrey Pennington, et al. GloVe: Global Vectors for Word Representation, 2014, EMNLP.
[18] Jason Yosinski, et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[19] Quoc V. Le, et al. Semi-supervised Sequence Learning, 2015, NIPS.
[20] Xiang Zhang, et al. Character-level Convolutional Networks for Text Classification, 2015, NIPS.
[21] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[22] Yoshua Bengio, et al. Neural Machine Translation by Jointly Learning to Align and Translate, 2014, ICLR.
[23] Ananthram Swami, et al. Crafting adversarial input sequences for recurrent neural networks, 2016, MILCOM 2016 - 2016 IEEE Military Communications Conference.
[24] Makoto Miwa, et al. End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures, 2016, ACL.
[25] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[26] Ananthram Swami, et al. Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples, 2016, ArXiv.
[27] George Kurian, et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016, ArXiv.
[28] Patrick D. McDaniel, et al. Cleverhans V0.1: an Adversarial Machine Learning Library, 2016, ArXiv.
[29] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[30] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[31] Sameep Mehta, et al. Towards Crafting Text Adversarial Samples, 2017, ArXiv.
[32] Xirong Li, et al. Deep Text Classification Can be Fooled, 2017, IJCAI.