Coherent Dialog Generation with Query Graph


Bibliographic Details
Published in: ACM transactions on Asian and low-resource language information processing, 2021-11, Vol. 20 (6), p. 1-23
Main authors: Xu, Jun, Lei, Zeyang, Wang, Haifeng, Niu, Zheng-Yu, Wu, Hua, Che, Wanxiang, Huang, Jizhou, Liu, Ting
Format: Article
Language: English
Online access: Full text
Description
Abstract: Learning to generate coherent and informative dialogs is an enduring challenge for open-domain conversation generation. Previous work leverages knowledge graphs or documents to facilitate informative dialog generation, with little attention to dialog coherence. In this article, to enhance multi-turn open-domain dialog coherence, we propose to leverage a new knowledge source, web search session data, to facilitate hierarchical knowledge sequence planning, which determines a sketch of a multi-turn dialog. Specifically, we formulate knowledge sequence planning, or dialog policy learning, as a graph-grounded Reinforcement Learning (RL) problem. To this end, we first build a two-level query graph with queries as utterance-level vertices and their topics (entities in queries) as topic-level vertices. We then present a two-level dialog policy model that plans a high-level topic sequence and a low-level query sequence over the query graph to guide a knowledge-aware response generator. In particular, to foster forward-looking knowledge planning decisions for better dialog coherence, we devise a heterogeneous graph neural network to incorporate neighbouring vertex information, or possible future RL action information, into each vertex (as an RL action) representation. Experimental results on two benchmark dialog datasets demonstrate that our framework can outperform strong baselines in terms of dialog coherence, informativeness, and engagingness.
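To make the two-level query graph described in the abstract concrete, the following is a minimal, hypothetical sketch of such a structure: topic-level vertices (entities) connected by co-occurrence, utterance-level vertices (queries) linked when they follow one another in a search session, and topic-query membership edges. All class and method names are illustrative assumptions for exposition, not the authors' implementation.

```python
from collections import defaultdict

class QueryGraph:
    """Illustrative two-level query graph (not the paper's actual code):
    topic-level vertices are entities, utterance-level vertices are queries."""

    def __init__(self):
        self.topic_edges = defaultdict(set)       # topic -> neighbouring topics
        self.query_edges = defaultdict(set)       # query -> neighbouring queries
        self.topic_to_queries = defaultdict(set)  # topic -> queries containing it

    def add_query(self, query, topics):
        # Attach the query to each topic (entity) it mentions,
        # and connect co-occurring topics at the topic level.
        for t in topics:
            self.topic_to_queries[t].add(query)
        for a in topics:
            for b in topics:
                if a != b:
                    self.topic_edges[a].add(b)

    def link_queries(self, q1, q2):
        # e.g. q1 and q2 appeared adjacently in the same web search session
        self.query_edges[q1].add(q2)
        self.query_edges[q2].add(q1)

    def candidate_actions(self, current_query):
        # Low-level policy: candidate next queries (RL actions) are the
        # neighbours of the current query vertex.
        return sorted(self.query_edges[current_query])

# Toy usage with made-up session data:
g = QueryGraph()
g.add_query("best sushi in tokyo", ["sushi", "tokyo"])
g.add_query("tokyo travel tips", ["tokyo"])
g.link_queries("best sushi in tokyo", "tokyo travel tips")
```

In the paper's framework, the high-level policy would first pick a topic sequence over `topic_edges`, and the low-level policy would then pick a query sequence restricted to queries of the chosen topics; the heterogeneous GNN enriches each vertex representation with its neighbours before the policy scores it.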
ISSN:2375-4699
2375-4702
DOI:10.1145/3462551