A Closer Look into the Robustness of Neural Dependency Parsers Using Better Adversarial Examples
Wanxiang Che | Ting Liu | Shay B. Cohen | Zhilin Lei | Ivan Titov | Yuxuan Wang