Diversify Your Datasets: Analyzing Generalization via Controlled Variance in Adversarial Datasets
Ido Dagan | Roee Aharoni | Vered Shwartz | Ohad Rozen