It’s Morphin’ Time! Combating Linguistic Discrimination with Inflectional Perturbations