Chuanrong Li | Xinyi Wu | Shane Steinert-Threlkeld | Xuhui Zhou | Lin Shengshuo | Leo Z. Liu
[1] Christopher Potts, et al. A Large Annotated Corpus for Learning Natural Language Inference, 2015, EMNLP.
[2] Peter Szolovits, et al. Is BERT Really Robust? Natural Language Attack on Text Classification and Entailment, 2019, ArXiv.
[3] R. Thomas McCoy, et al. Syntactic Data Augmentation Increases Robustness to Inference Heuristics, 2020, ACL.
[4] Peter Szolovits, et al. Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment, 2020, AAAI.
[5] Omer Levy, et al. Annotation Artifacts in Natural Language Inference Data, 2018, NAACL.
[6] Dan Flickinger, et al. Minimal Recursion Semantics: An Introduction, 2005.
[7] Samuel R. Bowman, et al. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference, 2017, NAACL.
[8] Ivan A. Sag, et al. Book Reviews: Head-Driven Phrase Structure Grammar and German in Head-Driven Phrase Structure Grammar, 1996, CL.
[9] Beatrice Santorini, et al. Building a Large Annotated Corpus of English: The Penn Treebank, 1993, CL.
[10] Sebastian Ruder, et al. Universal Language Model Fine-tuning for Text Classification, 2018, ACL.
[11] Eduard Hovy, et al. Learning the Difference that Makes a Difference with Counterfactually-Augmented Data, 2020, ICLR.
[12] Adam Trischler, et al. How Reasonable are Common-Sense Reasoning Tasks: A Case-Study on the Winograd Schema Challenge and SWAG, 2018, EMNLP.
[13] C. Pollard, et al. Head-Driven Phrase Structure Grammar, 1994, CSLI Publications.
[14] Reut Tsarfaty, et al. Evaluating NLP Models via Contrast Sets, 2020, ArXiv.
[15] Alec Radford, et al. Improving Language Understanding by Generative Pre-Training, 2018.
[16] Ilya Sutskever, et al. Language Models are Unsupervised Multitask Learners, 2019.
[17] Luke S. Zettlemoyer, et al. Deep Contextualized Word Representations, 2018, NAACL.
[18] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[19] Jason Weston, et al. Build it Break it Fix it for Dialogue Safety: Robustness from Adversarial Human Attack, 2019, EMNLP.
[20] Jacob Andreas, et al. Good-Enough Compositional Data Augmentation, 2019, ACL.
[21] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, ArXiv.
[22] Yoav Goldberg, et al. Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets, 2019, EMNLP.
[23] R. Thomas McCoy, et al. Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference, 2019, ACL.
[24] Yejin Choi, et al. Adversarial Filters of Dataset Biases, 2020, ICML.
[25] Marco Baroni, et al. Generalization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks, 2017, ICML.
[26] Dan Flickinger, et al. An Open Source Grammar Development Environment and Broad-coverage English Grammar Using HPSG, 2000, LREC.
[27] Rémi Louf, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2019, ArXiv.