An Adaptive Framework for Conversational Question Answering

In Conversational Question Answering (CoQA), humans pose a series of questions to satisfy their information needs. Based on our preliminary analysis, these questions fall into two major types: verification questions and knowledge-seeking questions. The former verify existing facts, while the latter seek new knowledge about a specific object. These two types of questions differ significantly in how they should be answered. However, existing methods usually treat them uniformly, which can easily bias a model toward the dominant question type and lead to inferior overall performance. In this work, we propose an adaptive framework that handles the two types of questions in different ways according to their respective characteristics. We conduct experiments on the recently released CoQA benchmark dataset, and the results demonstrate that our method outperforms state-of-the-art baseline methods.
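
To make the adaptive idea concrete, the sketch below shows one possible way to route each question to a type-specific answering strategy. This is only a minimal illustration under assumed names (classify_question, answer_verification, answer_knowledge_seeking) and a toy keyword heuristic; it is not the paper's actual model.

```python
from typing import List


def classify_question(question: str) -> str:
    """Toy heuristic: verification questions often start with an auxiliary verb (yes/no style)."""
    verification_starts = ("is ", "are ", "was ", "were ", "did ",
                           "does ", "do ", "can ", "has ", "have ")
    if question.lower().startswith(verification_starts):
        return "verification"
    return "knowledge_seeking"


def answer_verification(question: str, context: str, history: List[str]) -> str:
    # Placeholder: a real system would check the queried fact against the context.
    last_word = question.lower().rstrip("?").split()[-1]
    return "yes" if last_word in context.lower() else "no"


def answer_knowledge_seeking(question: str, context: str, history: List[str]) -> str:
    # Placeholder: a real system would extract or generate an answer span from the context.
    return context.split(".")[0]


def adaptive_answer(question: str, context: str, history: List[str]) -> str:
    """Dispatch each question to a different answering strategy based on its predicted type."""
    if classify_question(question) == "verification":
        return answer_verification(question, context, history)
    return answer_knowledge_seeking(question, context, history)
```

The design point illustrated here is the dispatch itself: instead of a single uniform answerer, question-type prediction selects among specialized answering routines, so the dominant question type no longer dominates training or inference for the other type.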