Coarse and Fine Granularity Graph Reasoning for Interpretable Multi-Hop Question Answering

Interpretable multi-hop question answering requires step-by-step reasoning over multiple documents, gathering scattered supporting facts to answer the question. Prior work has proposed entity-graph methods that aggregate entity information to improve reasoning. However, an entity graph discards non-entity information that is also important for understanding the semantics, and entities that appear in noisy sentences may mislead the reasoning process. In this paper, we propose the Coarse and Fine Granularity Graph Network (CFGGN), a novel interpretable model that combines sentence information and entity information to answer multi-hop questions. CFGGN consists of a coarse-grained module that performs sentence-level reasoning and a fine-grained module that performs entity-level inference. For sentence-level reasoning, a sentence graph is constructed to filter out noisy sentences and capture sentence features; for entity-level inference, a dynamic entity graph is used for entity-level reasoning. A fusion module integrates the information of the two granularities. To make the overall process interpretable, we compute a reasoning score at each step and present the reasoning path from the input documents to the final answer. Evaluation on the HotpotQA dataset in the distractor setting shows that our method outperforms the published state-of-the-art entity-based method on five of six metrics.
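The abstract describes a two-granularity pipeline: message passing over a sentence graph to score and filter noisy sentences, message passing over a dynamic entity graph gated by those scores, and a fusion step that merges the two levels. The sketch below illustrates that flow in PyTorch. It is a minimal illustration, not the paper's implementation: the module names, the single-head GAT-style attention, the sentence-scoring head, the ent2sent mapping, and the gated fusion are all assumptions made for the example.

```python
# Minimal sketch of coarse (sentence) and fine (entity) graph reasoning.
# All names, dimensions, and the gated fusion are illustrative assumptions;
# the CFGGN paper's actual architecture may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphAttentionLayer(nn.Module):
    """Single-head, GAT-style attention over a node-feature matrix."""

    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.attn = nn.Linear(2 * dim, 1)

    def forward(self, h, adj):
        # h: (n, dim) node features; adj: (n, n) 0/1 mask, assumed to
        # include self-loops so every softmax row is well-defined.
        z = self.proj(h)
        n = z.size(0)
        pairs = torch.cat(
            [z.unsqueeze(1).expand(n, n, -1), z.unsqueeze(0).expand(n, n, -1)],
            dim=-1,
        )
        scores = self.attn(pairs).squeeze(-1)              # (n, n)
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = F.softmax(scores, dim=-1)
        return F.relu(alpha @ z)


class CoarseFineReasoner(nn.Module):
    """Sentence-level (coarse) then entity-level (fine) reasoning, fused by a gate."""

    def __init__(self, dim):
        super().__init__()
        self.sent_gnn = GraphAttentionLayer(dim)
        self.ent_gnn = GraphAttentionLayer(dim)
        self.sent_scorer = nn.Linear(dim, 1)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, sent_h, sent_adj, ent_h, ent_adj, ent2sent):
        # Coarse step: propagate over the sentence graph, then score each
        # sentence so noisy sentences can be down-weighted.
        sent_h = self.sent_gnn(sent_h, sent_adj)
        sent_score = torch.sigmoid(self.sent_scorer(sent_h)).squeeze(-1)

        # Fine step: weight each entity by its host sentence's score
        # (ent2sent maps entity index -> sentence index), then propagate
        # over the entity graph.
        ent_h = ent_h * sent_score[ent2sent].unsqueeze(-1)
        ent_h = self.ent_gnn(ent_h, ent_adj)

        # Fusion: a learned gate mixes each entity's fine-grained state
        # with its sentence's coarse-grained state.
        ctx = sent_h[ent2sent]
        g = torch.sigmoid(self.gate(torch.cat([ent_h, ctx], dim=-1)))
        return g * ent_h + (1 - g) * ctx, sent_score
```

A toy invocation, with self-loop-only adjacency matrices for simplicity:

```python
dim = 8
model = CoarseFineReasoner(dim)
sent_h, ent_h = torch.randn(4, dim), torch.randn(6, dim)
sent_adj, ent_adj = torch.eye(4), torch.eye(6)
ent2sent = torch.tensor([0, 0, 1, 2, 3, 3])   # entity -> sentence index
fused, scores = model(sent_h, sent_adj, ent_h, ent_adj, ent2sent)
```

The per-step sentence scores returned here correspond loosely to the reasoning scores the abstract mentions for interpretability: they expose which sentences the model relied on at each hop.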
