Richard Socher | Caiming Xiong | Victor Zhong | Sewon Min
[1] Jürgen Schmidhuber,et al. Long Short-Term Memory , 1997, Neural Computation.
[2] Matthew Richardson,et al. MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text , 2013, EMNLP.
[3] Andrew Chou,et al. Semantic Parsing on Freebase from Question-Answer Pairs , 2013, EMNLP.
[4] Mihai Surdeanu,et al. The Stanford CoreNLP Natural Language Processing Toolkit , 2014, ACL.
[5] Nitish Srivastava,et al. Dropout: a simple way to prevent neural networks from overfitting , 2014, J. Mach. Learn. Res..
[6] Jeffrey Pennington,et al. GloVe: Global Vectors for Word Representation , 2014, EMNLP.
[7] Jimmy Ba,et al. Adam: A Method for Stochastic Optimization , 2014, ICLR.
[8] Phil Blunsom,et al. Teaching Machines to Read and Comprehend , 2015, NIPS.
[9] Yi Yang,et al. WikiQA: A Challenge Dataset for Open-Domain Question Answering , 2015, EMNLP.
[10] Jian Zhang,et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text , 2016, EMNLP.
[11] Bowen Zhou,et al. ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs , 2015, TACL.
[12] Kenton Lee,et al. Learning Recurrent Span Representations for Extractive Question Answering , 2016, ArXiv.
[13] Jason Weston,et al. Key-Value Memory Networks for Directly Reading Documents , 2016, EMNLP.
[14] Yelong Shen,et al. ReasoNet: Learning to Stop Reading in Machine Comprehension , 2016, CoCo@NIPS.
[15] Jason Weston,et al. Reading Wikipedia to Answer Open-Domain Questions , 2017, ACL.
[16] Deng Cai,et al. MEMEN: Multi-layer Embedding with Memory Networks for Machine Comprehension , 2017, ArXiv.
[17] Ali Farhadi,et al. Bidirectional Attention Flow for Machine Comprehension , 2016, ICLR.
[18] Kyunghyun Cho,et al. SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine , 2017, ArXiv.
[19] Philip Bachman,et al. NewsQA: A Machine Comprehension Dataset , 2016, Rep4NLP@ACL.
[20] Dirk Weissenborn,et al. Making Neural QA as Simple as Possible but not Simpler , 2017, CoNLL.
[21] Eunsol Choi,et al. Coarse-to-Fine Question Answering for Long Documents , 2016, ACL.
[22] Yuxing Peng,et al. Mnemonic Reader for Machine Comprehension , 2017, ArXiv.
[23] William W. Cohen,et al. Quasar: Datasets for Question Answering by Search and Reading , 2017, ArXiv.
[24] Guokun Lai,et al. RACE: Large-scale ReAding Comprehension Dataset From Examinations , 2017, EMNLP.
[25] Dirk Weissenborn,et al. Reading Twice for Natural Language Understanding , 2017, ArXiv.
[26] Guokun Lai,et al. Large-scale Cloze Test Dataset Designed by Teachers , 2018, ArXiv.
[27] Yoshimasa Tsuruoka,et al. A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks , 2016, EMNLP.
[28] Hannaneh Hajishirzi,et al. Question Answering through Transfer Learning from Large Fine-grained Supervision Data , 2017, ACL.
[29] Richard Socher,et al. Learned in Translation: Contextualized Word Vectors , 2017, NIPS.
[30] John Miller,et al. Globally Normalized Reader , 2017, EMNLP.
[31] Wei Zhang,et al. R3: Reinforced Reader-Ranker for Open-Domain Question Answering , 2017, ArXiv.
[32] Eunsol Choi,et al. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension , 2017, ACL.
[33] Percy Liang,et al. Adversarial Examples for Evaluating Reading Comprehension Systems , 2017, EMNLP.
[34] Richard Socher,et al. DCN+: Mixed Objective and Deep Residual Coattention for Question Answering , 2017, ICLR.
[35] Chris Dyer,et al. Dynamic Integration of Background Knowledge in Neural NLU Systems , 2017, ArXiv.
[36] Christopher Clark,et al. Simple and Effective Multi-Paragraph Reading Comprehension , 2017, ACL.
[37] Ankur P. Parikh,et al. Multi-Mention Learning for Reading Comprehension with Neural Cascades , 2017, ICLR.
[38] Yelong Shen,et al. FusionNet: Fusing via Fully-Aware Attention with Application to Machine Comprehension , 2017, ICLR.
[39] Chris Dyer,et al. The NarrativeQA Reading Comprehension Challenge , 2017, TACL.
[40] Ming Zhou,et al. Reinforced Mnemonic Reader for Machine Reading Comprehension , 2017, IJCAI.
[41] Nan Yang,et al. Context-Aware Answer Sentence Selection With Hierarchical Gated Recurrent Neural Networks , 2018, IEEE/ACM Transactions on Audio, Speech, and Language Processing.