R-Trans: RNN Transformer Network for Chinese Machine Reading Comprehension
Hui Wang | Xin Zhang | Shanshan Liu | Sheng Zhang
[1] Geoffrey E. Hinton, et al. Learning representations by back-propagating errors, 1986, Nature.
[2] Matthew Richardson, et al. MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text, 2013, EMNLP.
[3] Jeffrey Dean, et al. Distributed Representations of Words and Phrases and their Compositionality, 2013, NIPS.
[4] Yoshua Bengio, et al. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation, 2014, EMNLP.
[5] Jeffrey Pennington, et al. GloVe: Global Vectors for Word Representation, 2014, EMNLP.
[6] Phil Blunsom, et al. Teaching Machines to Read and Comprehend, 2015, NIPS.
[7] Jian Zhang, et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text, 2016, EMNLP.
[8] Eric Nichols, et al. Named Entity Recognition with Bidirectional LSTM-CNNs, 2015, TACL.
[9] Joelle Pineau, et al. Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models, 2015, AAAI.
[10] Jason Weston, et al. The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations, 2015, ICLR.
[11] Jianfeng Gao, et al. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset, 2016, CoCo@NIPS.
[12] Yelong Shen, et al. ReasoNet: Learning to Stop Reading in Machine Comprehension, 2016, CoCo@NIPS.
[13] Veselin Stoyanov, et al. Evaluation Measures for the SemEval-2016 Task 4 "Sentiment Analysis in Twitter" (Draft: Version 1.13), 2016.
[14] Zhiguo Wang, et al. Multi-Perspective Context Matching for Machine Comprehension, 2016, ArXiv.
[15] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[16] Ali Farhadi, et al. Bidirectional Attention Flow for Machine Comprehension, 2016, ICLR.
[17] Ruslan Salakhutdinov, et al. A Comparative Study of Word Embeddings for Reading Comprehension, 2017, ArXiv.
[18] Philip Bachman, et al. NewsQA: A Machine Comprehension Dataset, 2016, Rep4NLP@ACL.
[19] Ming Zhou, et al. S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension, 2017, AAAI.
[20] Guokun Lai, et al. RACE: Large-scale ReAding Comprehension Dataset From Examinations, 2017, EMNLP.
[21] Ming Zhou, et al. Gated Self-Matching Networks for Reading Comprehension and Question Answering, 2017, ACL.
[22] Dirk Weissenborn, et al. FastQA: A Simple and Efficient Neural Architecture for Question Answering, 2017, ArXiv.
[23] Richard Socher, et al. Dynamic Coattention Networks For Question Answering, 2016, ICLR.
[24] Shuohang Wang, et al. Machine Comprehension Using Match-LSTM and Answer Pointer, 2016, ICLR.
[25] Eunsol Choi, et al. QuAC: Question Answering in Context, 2018, EMNLP.
[26] Luke S. Zettlemoyer, et al. Deep Contextualized Word Representations, 2018, NAACL.
[27] Xiaodong Liu, et al. Stochastic Answer Networks for Machine Reading Comprehension, 2017, ACL.
[28] Oren Etzioni, et al. Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge, 2018, ArXiv.
[29] Quoc V. Le, et al. QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension, 2018, ICLR.
[30] Xinyan Xiao, et al. DuReader: a Chinese Machine Reading Comprehension Dataset from Real-world Applications, 2017, QA@ACL.
[31] Maosong Sun, et al. A Multi-answer Multi-task Framework for Real-world Machine Reading Comprehension, 2018, EMNLP.
[32] Chris Dyer, et al. The NarrativeQA Reading Comprehension Challenge, 2017, TACL.
[33] Ming Zhou, et al. Reinforced Mnemonic Reader for Machine Reading Comprehension, 2017, IJCAI.
[34] Jun Xu, et al. Hierarchical Answer Selection Framework for Multi-passage Machine Reading Comprehension, 2018, CCIR.
[35] Guokun Lai, et al. Large-scale Cloze Test Dataset Created by Teachers, 2017, EMNLP.
[36] Luo Si, et al. A Deep Cascade Model for Multi-Document Reading Comprehension, 2018, AAAI.
[37] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.