Nan Duan | Ming Gong | Xiaojun Quan | Duyu Tang | Daya Guo | Qinliang Su | Linjun Shou | Daxin Jiang | Wanjun Zhong | Zenan Xu
[1] Jonathan Berant, et al. oLMpics-On What Language Model Pre-training Captures, 2019, Transactions of the Association for Computational Linguistics.
[2] Peng Qi, et al. Do Syntax Trees Help Pre-trained Transformers Extract Information?, 2020, arXiv.
[3] Ali Farhadi, et al. Defending Against Neural Fake News, 2019, NeurIPS.
[4] Wenhan Xiong, et al. Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language Model, 2019, ICLR.
[5] Maosong Sun, et al. ERNIE: Enhanced Language Representation with Informative Entities, 2019, ACL.
[6] Alec Radford, et al. Improving Language Understanding by Generative Pre-Training, 2018.
[7] Thomas Wolf, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2019, arXiv.
[8] Yejin Choi, et al. Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning, 2019, EMNLP.
[9] Christopher D. Manning, et al. A Structural Probe for Finding Syntax in Word Representations, 2019, NAACL.
[10] Jeffrey Ling, et al. Matching the Blanks: Distributional Similarity for Relation Learning, 2019, ACL.
[11] Roy Schwartz, et al. Knowledge Enhanced Contextual Word Representations, 2019, EMNLP/IJCNLP.
[12] Danqi Chen, et al. Position-aware Attention and Supervised Data Improve Slot Filling, 2017, EMNLP.
[13] Christopher D. Manning, et al. Graph Convolution over Pruned Dependency Trees Improves Relation Extraction, 2018, EMNLP.
[14] Shafiq R. Joty, et al. Tree-structured Attention with Hierarchical Accumulation, 2020, ICLR.
[15] Ali Farhadi, et al. Bidirectional Attention Flow for Machine Comprehension, 2016, ICLR.
[16] Jannis Bulian, et al. Ask the Right Questions: Active Question Reformulation with Reinforcement Learning, 2017, ICLR.
[17] Lukasz Kaiser, et al. Attention Is All You Need, 2017, NIPS.
[18] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, arXiv.
[19] William W. Cohen, et al. Quasar: Datasets for Question Answering by Search and Reading, 2017, arXiv.
[20] Kentaro Inui, et al. An Attentive Neural Architecture for Fine-grained Entity Type Classification, 2016, AKBC@NAACL-HLT.
[21] Xuanjing Huang, et al. K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters, 2020, Findings of ACL.
[22] Kyunghyun Cho, et al. SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine, 2017, arXiv.
[23] Ming-Wei Chang, et al. REALM: Retrieval-Augmented Language Model Pre-Training, 2020, ICML.
[24] Tianyu Gao, et al. KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation, 2019, arXiv.
[25] Zhiyuan Liu, et al. Denoising Distantly Supervised Open-Domain Question Answering, 2018, ACL.
[26] Wei Zhang, et al. Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering, 2017, ICLR.
[27] Hung-Yi Lee, et al. Tree Transformer: Integrating Tree Structures into Self-Attention, 2019, EMNLP/IJCNLP.
[28] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[29] Zhuosheng Zhang, et al. SG-Net: Syntax-Guided Machine Reading Comprehension, 2019, AAAI.
[30] Daniel S. Weld, et al. Design Challenges for Entity Linking, 2015, TACL.
[31] Christopher D. Manning, et al. Stanza: A Python Natural Language Processing Toolkit for Many Human Languages, 2020, ACL.
[32] Ming Zhou, et al. Gated Self-Matching Networks for Reading Comprehension and Question Answering, 2017, ACL.
[33] Omer Levy, et al. Ultra-Fine Entity Typing, 2018, ACL.
[34] Wei Zhang, et al. R3: Reinforced Reader-Ranker for Open-Domain Question Answering, 2017, arXiv.