Understanding Dataset Design Choices for Multi-hop Reasoning