Nan Duan | Duyu Tang | Zhihao Fan | Daxin Jiang | Wanjun Zhong | Ming Zhou | Zhongyu Wei | Siyuan Wang
[1] Stella X. Yu, et al. Unsupervised Feature Learning via Non-parametric Instance Discrimination, 2018, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[2] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[3] Christopher Potts, et al. A large annotated corpus for learning natural language inference, 2015, EMNLP.
[4] Jiashi Feng, et al. ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning, 2020, ICLR.
[5] Yue Zhang, et al. Natural Language Inference in Context - Investigating Contextual Reasoning over Long Texts, 2020, AAAI.
[6] John McCarthy, et al. Artificial Intelligence, Logic and Formalizing Common Sense, 1989.
[7] Alec Radford, et al. Improving Language Understanding by Generative Pre-Training, 2018.
[8] Mark Hopkins, et al. Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples, 2018, ACL.
[9] Thomas Wolf, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2019, arXiv.
[10] Geoffrey E. Hinton, et al. A Simple Framework for Contrastive Learning of Visual Representations, 2020, ICML.
[11] Nils J. Nilsson, et al. Logic and Artificial Intelligence, 1991, Artif. Intell.
[12] Samuel R. Bowman, et al. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference, 2017, NAACL.
[13] Benno Stein, et al. The Argument Reasoning Comprehension Task: Identification and Reconstruction of Implicit Warrants, 2017, NAACL.
[14] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, arXiv.
[15] Yiming Yang, et al. XLNet: Generalized Autoregressive Pretraining for Language Understanding, 2019, NeurIPS.
[16] Ido Dagan, et al. The Third PASCAL Recognizing Textual Entailment Challenge, 2007, ACL-PASCAL@ACL.
[17] Frank Hutter, et al. Fixing Weight Decay Regularization in Adam, 2017, arXiv.
[18] Ilya Sutskever, et al. Language Models are Unsupervised Multitask Learners, 2019.
[19] Elizabeth M. Rudnick, et al. Static logic implication with application to redundancy identification, 1997, Proceedings of the 15th IEEE VLSI Test Symposium.
[20] Alan F. Smeaton, et al. Contrastive Representation Learning: A Framework and Review, 2020, IEEE Access.
[21] Peter Clark, et al. SciTaiL: A Textual Entailment Dataset from Science Question Answering, 2018, AAAI.
[22] Guokun Lai, et al. RACE: Large-scale ReAding Comprehension Dataset From Examinations, 2017, EMNLP.
[23] Kevin Gimpel, et al. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations, 2019, ICLR.
[24] Peter Norvig, et al. Artificial Intelligence: A Modern Approach, 1995.
[25] Hanmeng Liu, et al. LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning, 2020, IJCAI.
[26] Kaiming He, et al. Momentum Contrast for Unsupervised Visual Representation Learning, 2020, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).