A Graph-guided Multi-round Retrieval Method for Conversational Open-domain Question Answering

In recent years, conversational agents have provided natural and convenient access to useful information in people's daily lives, giving rise to a broad new research topic: conversational question answering (QA). Among the popular conversational QA tasks, conversational open-domain QA, which requires retrieving relevant passages from the Web before extracting exact answers, is more practical but less studied. The main challenge is how to effectively capture and fully exploit the historical context of the conversation to facilitate large-scale retrieval. Existing work mainly utilizes history questions to refine the current question or to enhance its representation, yet the relations between history answers and the current answer in a conversation, which are also critical to the task, are entirely neglected. To address this problem, we propose a novel graph-guided retrieval method that models the relations among answers across conversation turns. In particular, it utilizes a passage graph derived from hyperlink-connected passages, which contain history answers and potential current answers, to retrieve more relevant passages for subsequent answer extraction. Moreover, to collect more complementary information from the historical context, we also propose incorporating a multi-round relevance feedback technique to explore the impact of the retrieval context on current question understanding. Experimental results on a public dataset verify the effectiveness of our proposed method. Notably, the F1 score is improved by 5% and 11% with predicted history answers and true history answers, respectively.
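The abstract describes two mechanisms: expanding retrieval candidates over a hyperlink-connected passage graph seeded by passages containing history answers, and a multi-round loop in which retrieved passages feed back into the query for the next round. The sketch below is a minimal illustration of these two ideas under simplifying assumptions, not the authors' implementation; all names (hyperlink_graph, corpus, score, multi_round_retrieve, the hop/round counts) are hypothetical, and the lexical-overlap scorer stands in for the dense retriever a real system would use.

```python
# Hypothetical sketch of graph-guided, multi-round retrieval (not the paper's code).
from collections import defaultdict

# Hypothetical hyperlink graph: passage id -> ids of passages it links to.
hyperlink_graph = defaultdict(list, {
    "p_history_answer": ["p_neighbor_1", "p_neighbor_2"],
    "p_neighbor_1": ["p_neighbor_3"],
})

# Toy passage corpus keyed by passage id.
corpus = {
    "p_history_answer": "passage containing a previous turn's answer ...",
    "p_neighbor_1": "hyperlinked passage about the follow-up topic ...",
    "p_neighbor_2": "another hyperlinked passage ...",
    "p_neighbor_3": "a passage two hops away ...",
}

def expand_candidates(seed_passages, hops=1):
    """Collect passages reachable from the seeds within `hops` hyperlink steps."""
    frontier, seen = set(seed_passages), set(seed_passages)
    for _ in range(hops):
        frontier = {nbr for pid in frontier for nbr in hyperlink_graph[pid]} - seen
        seen |= frontier
    return seen

def score(query, passage_text):
    """Placeholder relevance score; a dense retriever would be used in practice."""
    return len(set(query.lower().split()) & set(passage_text.lower().split()))

def multi_round_retrieve(question, history_answer_passages, rounds=2, top_k=2):
    """Rank graph-expanded candidates, refining the query each round with terms
    from the top-ranked passages (a simple stand-in for relevance feedback)."""
    query = question
    candidates = expand_candidates(history_answer_passages, hops=rounds)
    ranked = []
    for _ in range(rounds):
        ranked = sorted(candidates, key=lambda pid: score(query, corpus[pid]),
                        reverse=True)
        # Pseudo relevance feedback: fold the current top passages into the query.
        feedback = " ".join(corpus[pid] for pid in ranked[:top_k])
        query = question + " " + feedback
    return ranked[:top_k]

print(multi_round_retrieve("what does the hyperlinked follow-up passage describe",
                           {"p_history_answer"}))
```

In this toy setting the seed passage containing the history answer anchors the search, the hyperlink graph supplies candidates likely to hold the current answer, and each round re-scores them with a feedback-enriched query, mirroring the retrieval-then-extraction pipeline outlined above.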
