A Dynamic Answering Path Based Fusion Model for KGQA

Knowledge Graph Question Answering (KGQA) underpins information retrieval systems, intelligent customer service systems, and similar applications, and has therefore attracted considerable research attention. Although deep learning models have further improved KGQA performance, several difficulties remain, such as how to represent questions and answers and how to construct the candidate path set efficiently. In this paper, we propose a complete approach to the KGQA task. First, we devise a novel candidate path generation process that improves computational efficiency by reducing the number of candidate paths per question while preserving the accuracy of the results. Second, given the diversity of question phrasings and the variability of candidate paths, we present four models that learn semantic features of Chinese sequences with different emphases. Finally, to combine the strengths of these models, we propose a dedicated fusion policy that selects the most suitable path from the paths predicted by the individual models. We conduct experiments on the Chinese Knowledge Base Question Answering (CKBQA) dataset, and the results show that our approach outperforms the best system published in the CCKS 2019 competition.
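To make the fusion step more concrete, the sketch below shows one way such a policy could be realized: several independently trained path scorers each rate every candidate path, and the path with the highest weighted-average score is returned as the answering path. The scorer names, weights, and averaging rule are illustrative assumptions, not the policy actually used in the paper.

from typing import Callable, Dict, List

# Hypothetical fusion policy over several path-scoring models.
# Each scorer is assumed to map (question, candidate_path) -> score;
# the paper's actual models, weights, and selection rule may differ.

def fuse_and_select(
    question: str,
    candidate_paths: List[str],
    scorers: Dict[str, Callable[[str, str], float]],
    weights: Dict[str, float],
) -> str:
    """Return the candidate path with the highest weighted-average score."""
    total_weight = sum(weights[name] for name in scorers)
    best_path, best_score = candidate_paths[0], float("-inf")
    for path in candidate_paths:
        # Weighted average of the scores assigned by each model.
        fused = sum(
            weights[name] * scorer(question, path)
            for name, scorer in scorers.items()
        ) / total_weight
        if fused > best_score:
            best_path, best_score = path, fused
    return best_path

# Toy usage with stub scorers standing in for the semantic-matching models.
if __name__ == "__main__":
    scorers = {
        "model_a": lambda q, p: len(set(q) & set(p)) / max(len(p), 1),
        "model_b": lambda q, p: 1.0 if "身高" in p else 0.5,
    }
    weights = {"model_a": 0.6, "model_b": 0.4}
    paths = ["姚明 -> 身高", "姚明 -> 妻子 -> 身高"]
    print(fuse_and_select("姚明的身高是多少？", paths, scorers, weights))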
