Coherent Dialog Generation with Query Graph

Learning to generate coherent and informative dialogs is an enduring challenge for open-domain conversation generation. Previous work leverages knowledge graphs or documents to facilitate informative dialog generation, but pays little attention to dialog coherence. In this article, to enhance multi-turn open-domain dialog coherence, we propose to leverage a new knowledge source, web search session data, to facilitate hierarchical knowledge sequence planning, which determines the sketch of a multi-turn dialog. Specifically, we formulate knowledge sequence planning, i.e., dialog policy learning, as a graph-grounded Reinforcement Learning (RL) problem. To this end, we first build a two-level query graph with queries as utterance-level vertices and their topics (entities in the queries) as topic-level vertices. We then present a two-level dialog policy model that plans a high-level topic sequence and a low-level query sequence over the query graph to guide a knowledge-aware response generator. In particular, to foster forward-looking knowledge planning decisions for better dialog coherence, we devise a heterogeneous graph neural network that incorporates neighboring vertex information, i.e., possible future RL action information, into the representation of each vertex (as an RL action). Experimental results on two benchmark dialog datasets demonstrate that our framework outperforms strong baselines in terms of dialog coherence, informativeness, and engagingness.
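
For concreteness, the sketch below shows one plausible way to assemble such a two-level query graph from search session data. It is a minimal illustration, not the paper's implementation: the session-based edge construction and all names (e.g., `QueryGraph`, `add_session`) are assumptions introduced here for exposition.

```python
# Minimal sketch of a two-level query graph built from search sessions.
# Assumptions (not from the abstract): queries co-occurring in a session
# are linked at the utterance level, topics are pre-extracted entity
# strings, and topic-topic edges are induced by session co-occurrence.
from collections import defaultdict
from itertools import combinations

class QueryGraph:
    def __init__(self):
        self.query_edges = defaultdict(set)       # query <-> query (utterance level)
        self.topic_edges = defaultdict(set)       # topic <-> topic (topic level)
        self.topic_to_queries = defaultdict(set)  # cross-level links

    def add_session(self, session):
        """session: list of (query, [topic entities]) pairs from one search session."""
        # Utterance-level edges: link queries appearing in the same session.
        for (q1, _), (q2, _) in combinations(session, 2):
            self.query_edges[q1].add(q2)
            self.query_edges[q2].add(q1)
        # Cross-level edges: attach each query to its topic vertices.
        for query, topics in session:
            for t in topics:
                self.topic_to_queries[t].add(query)
        # Topic-level edges: link topics that co-occur within the session.
        topics_in_session = {t for _, ts in session for t in ts}
        for t1, t2 in combinations(sorted(topics_in_session), 2):
            self.topic_edges[t1].add(t2)
            self.topic_edges[t2].add(t1)

    def candidate_queries(self, topic):
        """Low-level action space: query vertices reachable from a chosen topic."""
        return self.topic_to_queries[topic]

# Toy usage with a fabricated two-query session.
g = QueryGraph()
g.add_session([("best sci-fi movies 2023", ["sci-fi movies"]),
               ("dune part two review", ["sci-fi movies", "dune"])])
print(g.candidate_queries("dune"))
```

Under these assumptions, the high-level policy would pick a topic vertex and the low-level policy would pick one of its `candidate_queries`, which is the hierarchical planning pattern the abstract describes.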