Asking Clarification Questions in Knowledge-Based Question Answering

The ability to ask clarification questions is essential for knowledge-based question answering (KBQA) systems, especially for handling ambiguous phenomena. Despite its importance, clarification has not been well explored in current KBQA systems. Further progress requires supervised resources for training and evaluation, as well as powerful models for clarification-related text understanding and generation. In this paper, we construct a new clarification dataset, CLAQUA, with nearly 40K open-domain examples. The dataset supports three serial tasks: given a question, identify whether clarification is needed; if so, generate a clarification question; then predict answers based on external user feedback. We provide representative baselines for these tasks and further introduce a coarse-to-fine model for clarification question generation. Experiments show that the proposed model outperforms strong baselines. Further analysis demonstrates that our dataset introduces new challenges and that several problems remain open, such as designing reliable automatic evaluation metrics for clarification question generation and building models that handle entity sparsity.
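The three serial tasks described above can be read as a single pipeline. The sketch below is a toy illustration under assumed names: needs_clarification, generate_clarification, predict_answer, and get_user_feedback are hypothetical stand-ins for trained models and a human in the loop, not the paper's actual API.

```python
# Minimal sketch of the three serial CLAQUA tasks.
# All callables below are illustrative placeholders, not the paper's implementation.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class KBQAOutput:
    clarification_question: Optional[str]  # None when no clarification was needed
    answer: str


def run_pipeline(
    question: str,
    needs_clarification: Callable[[str], bool],            # Task 1: clarification identification
    generate_clarification: Callable[[str], str],          # Task 2: clarification question generation
    predict_answer: Callable[[str, Optional[str]], str],   # Task 3: answer prediction
    get_user_feedback: Callable[[str], str],               # external user feedback
) -> KBQAOutput:
    # Task 1: decide whether the input question is ambiguous.
    if not needs_clarification(question):
        return KBQAOutput(None, predict_answer(question, None))

    # Task 2: ask a clarification question (coarse-to-fine in the paper:
    # first produce a coarse pattern, then fill in entity-specific detail).
    cq = generate_clarification(question)

    # Task 3: predict the answer conditioned on the user's feedback.
    feedback = get_user_feedback(cq)
    return KBQAOutput(cq, predict_answer(question, feedback))


# Toy usage with rule-based stubs standing in for trained models.
if __name__ == "__main__":
    out = run_pipeline(
        question="When was Harry Potter released?",
        needs_clarification=lambda q: "Harry Potter" in q,  # ambiguous: book vs. film
        generate_clarification=lambda q: "Do you mean the book or the film?",
        predict_answer=lambda q, fb: "1997" if fb == "the book" else "2001",
        get_user_feedback=lambda cq: "the book",
    )
    print(out)
```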
