Moshe Tennenholtz | Yoav Shoham | Opher Lieber | Kevin Leyton-Brown | Omri Abend | Yoav Levine | Barak Lenz
[1] Samuel R. Bowman, et al. Neural Network Acceptability Judgments, 2018, TACL.
[2] Mark Chen, et al. Language Models are Few-Shot Learners, 2020, NeurIPS.
[3] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[4] Omer Levy, et al. SpanBERT: Improving Pre-training by Representing and Predicting Spans, 2019, TACL.
[5] Samuel R. Bowman, et al. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference, 2017, NAACL.
[6] Yu Sun, et al. ERNIE: Enhanced Representation through Knowledge Integration, 2019, arXiv.
[7] Willem H. Zuidema. What are the Productive Units of Natural Language Grammar? A DOP Approach to the Automatic Identification of Constructions, 2006, CoNLL.
[8] Percy Liang, et al. Know What You Don’t Know: Unanswerable Questions for SQuAD, 2018, ACL.
[9] Hang Li, et al. AMBERT: A Pre-trained Language Model with Multi-Grained Tokenization, 2020, arXiv.
[10] Ido Dagan, et al. The Third PASCAL Recognizing Textual Entailment Challenge, 2007, ACL-PASCAL@ACL.
[11] G. A. Barnard, et al. Transmission of Information: A Statistical Theory of Communications, 1961.
[12] Roy Bar-Haim, et al. The Second PASCAL Recognising Textual Entailment Challenge, 2006.
[13] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[14] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, arXiv.
[15] Chris Brockett, et al. Automatically Constructing a Corpus of Sentential Paraphrases, 2005, IJCNLP.
[16] Doug Downey, et al. Locating Complex Named Entities in Web Text, 2007, IJCAI.
[18] Tim van de Cruys. Two Multivariate Generalizations of Pointwise Mutual Information, 2011, Proceedings of the Workshop on Distributional Semantics and Compositionality.
[19] Colin Raffel, et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, 2019, JMLR.
[20] Eneko Agirre, et al. SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation, 2017, SemEval.
[21] Ioannis Korkontzelos, et al. Reviewing and Evaluating Automatic Term Recognition Techniques, 2008, GoTAL.
[22] Guokun Lai, et al. RACE: Large-scale ReAding Comprehension Dataset From Examinations, 2017, EMNLP.
[23] Carlos Ramisch, et al. A Broad Evaluation of Techniques for Automatic Acquisition of Multiword Expressions, 2012, ACL.
[24] Ming-Wei Chang, et al. REALM: Retrieval-Augmented Language Model Pre-Training, 2020, ICML.
[25] Omer Levy, et al. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding, 2018, BlackboxNLP@EMNLP.
[26] Christopher Potts, et al. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank, 2013, EMNLP.
[27] Hector J. Levesque, et al. The Winograd Schema Challenge, 2011, AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning.
[28] Ilya Sutskever, et al. Language Models are Unsupervised Multitask Learners, 2019.
[29] Jian Zhang, et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text, 2016, EMNLP.