Enhancing Pre-Trained Language Representations with Rich Knowledge for Machine Reading Comprehension
An Yang | Jing Liu | Kai Liu | Sujian Li | Yajuan Lyu | Hua Wu | Quan Wang | Qiaoqiao She
[1] Danqi Chen,et al. A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task , 2016, ACL.
[2] Quoc V. Le,et al. QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension , 2018, ICLR.
[3] Chris Dyer,et al. Dynamic Integration of Background Knowledge in Neural NLU Systems , 2017, ArXiv:1706.02596.
[4] Tom M. Mitchell,et al. Leveraging Knowledge Bases in LSTMs for Improving Machine Reading , 2017, ACL.
[5] Ming Zhou,et al. Improving Question Answering by Commonsense-Based Pre-Training , 2018, NLPCC.
[6] Xiaodong Liu,et al. Stochastic Answer Networks for Machine Reading Comprehension , 2017, ACL.
[7] Shuohang Wang,et al. Machine Comprehension Using Match-LSTM and Answer Pointer , 2016, ICLR.
[8] Jianfeng Gao,et al. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset , 2018, ArXiv.
[9] Jian Zhang,et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text , 2016, EMNLP.
[10] Ming Zhou,et al. Gated Self-Matching Networks for Reading Comprehension and Question Answering , 2017, ACL.
[11] Oren Etzioni,et al. Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge , 2018, ArXiv.
[12] Luke S. Zettlemoyer,et al. Dissecting Contextual Word Embeddings: Architecture and Representation , 2018, EMNLP.
[13] Steven Bird,et al. NLTK: The Natural Language Toolkit , 2002, ACL.
[14] Jimmy Ba,et al. Adam: A Method for Stochastic Optimization , 2014, ICLR.
[15] George A. Miller,et al. WordNet: A Lexical Database for English , 1995, HLT.
[16] Mohit Bansal,et al. Commonsense for Generative Multi-Hop Question Answering Tasks , 2018, EMNLP.
[17] Jonathan Berant,et al. CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge , 2019, NAACL.
[18] Kyunghyun Cho,et al. SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine , 2017, ArXiv.
[19] Yoav Goldberg,et al. Assessing BERT's Syntactic Abilities , 2019, ArXiv.
[20] Ting Liu,et al. Attention-over-Attention Neural Networks for Reading Comprehension , 2016, ACL.
[21] Todor Mihaylov,et al. Knowledgeable Reader: Enhancing Cloze-Style Reading Comprehension with External Commonsense Knowledge , 2018, ACL.
[22] Simon Ostermann,et al. SemEval-2018 Task 11: Machine Comprehension Using Commonsense Knowledge , 2018, SemEval@NAACL-HLT.
[23] Wei Zhao,et al. Yuanfudao at SemEval-2018 Task 11: Three-way Attention and Relational Knowledge for Commonsense Machine Comprehension , 2018, SemEval@NAACL-HLT.
[24] Mihai Surdeanu,et al. The Stanford CoreNLP Natural Language Processing Toolkit , 2014, ACL.
[25] Estevam R. Hruschka,et al. Toward an Architecture for Never-Ending Language Learning , 2010, AAAI.
[26] Lukasz Kaiser,et al. Attention is All you Need , 2017, NIPS.
[27] Phil Blunsom,et al. Teaching Machines to Read and Comprehend , 2015, NIPS.
[28] Christopher Clark,et al. Simple and Effective Multi-Paragraph Reading Comprehension , 2017, ACL.
[29] Heng Ji,et al. Improving Question Answering with External Knowledge , 2019, EMNLP.
[30] Ali Farhadi,et al. Bidirectional Attention Flow for Machine Comprehension , 2016, ICLR.
[31] Jianfeng Gao,et al. Embedding Entities and Relations for Learning and Inference in Knowledge Bases , 2014, ICLR.
[32] Xiaodong Liu,et al. ReCoRD: Bridging the Gap between Human and Machine Commonsense Reading Comprehension , 2018, ArXiv.
[33] Richard Socher,et al. Dynamic Coattention Networks For Question Answering , 2016, ICLR.
[34] Alec Radford,et al. Improving Language Understanding by Generative Pre-Training , 2018 .
[35] Eunsol Choi,et al. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension , 2017, ACL.
[36] Ming-Wei Chang,et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding , 2019, NAACL.
[37] Peter Clark,et al. Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering , 2018, EMNLP.
[38] Percy Liang,et al. Know What You Don’t Know: Unanswerable Questions for SQuAD , 2018, ACL.
[39] Luke S. Zettlemoyer,et al. Deep Contextualized Word Representations , 2018, NAACL.
[40] Zhen-Hua Ling,et al. Neural Natural Language Inference Models Enhanced with External Knowledge , 2017, ACL.