Zhihong Tian | Zhaoquan Gu | Le Wang | Bin Zhu
[1] Quan Z. Sheng,et al. Adversarial Attacks on Deep Learning Models in Natural Language Processing: A Survey , 2019 .
[2] Wanxiang Che,et al. Generating Natural Language Adversarial Examples through Probability Weighted Word Saliency , 2019, ACL.
[3] Ananthram Swami,et al. Crafting adversarial input sequences for recurrent neural networks , 2016, MILCOM.
[4] Mani B. Srivastava,et al. Generating Natural Language Adversarial Examples , 2018, EMNLP.
[5] Sameep Mehta,et al. Towards Crafting Text Adversarial Samples , 2017, ArXiv.
[6] Yoon Kim,et al. Convolutional Neural Networks for Sentence Classification , 2014, EMNLP.
[7] Yu Wang,et al. A Comprehensive Survey of Grammar Error Correction , 2020, ArXiv.
[8] Prashanth Vijayaraghavan,et al. Generating Black-Box Adversarial Examples for Text Classifiers Using a Deep Reinforced Model , 2019, ECML/PKDD.
[9] Sameer Singh,et al. Generating Natural Adversarial Examples , 2017, ICLR.
[10] Patrick D. McDaniel,et al. On the (Statistical) Detection of Adversarial Examples , 2017, ArXiv.
[11] Jürgen Schmidhuber,et al. Long Short-Term Memory , 1997, Neural Computation.
[12] Bennett Kleinberg,et al. Frequency-Guided Word Substitutions for Detecting Textual Adversarial Examples , 2020, EACL.
[13] James Bailey,et al. Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality , 2018, ICLR.
[14] Yu Cheng,et al. FreeLB: Enhanced Adversarial Training for Natural Language Understanding , 2020, ICLR.
[15] Bin Dong,et al. You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle , 2019, NeurIPS.
[16] John X. Morris,et al. TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP , 2020, EMNLP.
[17] Hiroyuki Shindo,et al. Interpretable Adversarial Perturbation in Input Embedding Space for Text , 2018, IJCAI.
[18] Ting Wang,et al. TextBugger: Generating Adversarial Text Against Real-world Applications , 2018, NDSS.
[19] Matt Post,et al. Grammatical Error Correction with Neural Reinforcement Learning , 2017, IJCNLP.
[20] Joan Bruna,et al. Intriguing properties of neural networks , 2013, ICLR.
[21] Andrew M. Dai,et al. Adversarial Training Methods for Semi-Supervised Text Classification , 2016, ICLR.
[22] Jason Baldridge,et al. PAWS: Paraphrase Adversaries from Word Scrambling , 2019, NAACL.
[23] Peter Szolovits,et al. Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment , 2020, AAAI.
[24] Samy Bengio,et al. Adversarial Machine Learning at Scale , 2016, ICLR.
[25] Christopher Potts,et al. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank , 2013, EMNLP.
[26] Atul Prakash,et al. Robust Physical-World Attacks on Machine Learning Models , 2017, ArXiv.
[27] Jan Hendrik Metzen,et al. On Detecting Adversarial Perturbations , 2017, ICLR.
[28] Ryan R. Curtin,et al. Detecting Adversarial Samples from Artifacts , 2017, ArXiv.
[29] Jonathon Shlens,et al. Explaining and Harnessing Adversarial Examples , 2014, ICLR.
[30] Xin Li,et al. Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics , 2017, ICCV.
[31] Hong Liu,et al. Towards Robustness Against Natural Language Word Substitutions , 2021, ICLR.
[32] Qian Chen,et al. T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack , 2020, EMNLP.
[33] Yanjun Qi,et al. Black-Box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers , 2018, 2018 IEEE Security and Privacy Workshops (SPW).
[34] Ming-Wei Chang,et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding , 2019, NAACL.
[35] Jinfeng Li,et al. TextShield: Robust Text Classification Based on Multimodal Embedding and Neural Machine Translation , 2020, USENIX Security Symposium.
[36] Aleksander Madry,et al. Towards Deep Learning Models Resistant to Adversarial Attacks , 2017, ICLR.
[37] Robert L. Mercer,et al. Context based spelling correction , 1991, Inf. Process. Manag..
[38] Kai-Wei Chang,et al. Learning to Discriminate Perturbations for Blocking Adversarial Attacks in Text Classification , 2019, EMNLP.
[39] Haoran Huang,et al. Spelling Error Correction with Soft-Masked BERT , 2020, ACL.
[40] Larry S. Davis,et al. Adversarial Training for Free! , 2019, NeurIPS.
[41] Omer Levy,et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach , 2019, ArXiv.
[42] Luke S. Zettlemoyer,et al. Adversarial Example Generation with Syntactically Controlled Paraphrase Networks , 2018, NAACL.
[43] Jianfeng Gao,et al. SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization , 2019, ACL.
[44] Dejing Dou,et al. HotFlip: White-Box Adversarial Examples for Text Classification , 2017, ACL.
[45] Christopher Potts,et al. Learning Word Vectors for Sentiment Analysis , 2011, ACL.
[46] Kang Li,et al. CAT-Gen: Improving Robustness in NLP Models via Controlled Adversarial Text Generation , 2020, EMNLP.
[47] Xipeng Qiu,et al. BERT-ATTACK: Adversarial Attack against BERT Using BERT , 2020, EMNLP.
[48] Zhiyuan Liu,et al. Word-level Textual Adversarial Attacking as Combinatorial Optimization , 2019, ACL.