Generating More Interesting Responses in Neural Conversation Models with Distributional Constraints
Format: Article
Language: English
Abstract: Neural conversation models tend to generate safe, generic responses for most inputs. This is due to the limitations of likelihood-based decoding objectives in generation tasks with diverse outputs, such as conversation. To address this challenge, we propose a simple yet effective approach for incorporating side information in the form of distributional constraints over the generated responses. We propose two constraints, based on a model of syntax and topics (Griffiths et al., 2005) and on semantic similarity (Arora et al., 2016), that help generate more content-rich responses. We evaluate our approach against a variety of competitive baselines, using both automatic metrics and human judgments, and show that our approach generates responses that are much less generic without sacrificing plausibility. A working demo of our code can be found at https://github.com/abaheti95/DC-NeuralConversation.
DOI: 10.48550/arxiv.1809.01215
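
To make the abstract's idea concrete, below is a minimal sketch of decoding with a distributional constraint: beam candidates are re-ranked by the model's log-likelihood minus a penalty measuring how far the candidate's topic distribution drifts from a target distribution. This is not the authors' implementation (see their repository for that); all names here (constrained_score, topic_distribution, topic_words, etc.) are hypothetical, and a simple word-list topic proxy stands in for the HMM-LDA topic model (Griffiths et al., 2005) and the semantic-similarity term (Arora et al., 2016) used in the paper.

```python
# Sketch of re-ranking beam candidates under a distributional constraint.
# Hypothetical names throughout; not taken from DC-NeuralConversation.
import math
from collections import Counter


def topic_distribution(tokens, topic_words):
    """Empirical distribution over topics for one candidate response.

    `topic_words` maps each topic id to a set of indicator words; this
    crude assignment stands in for a learned topic model.
    """
    counts = Counter()
    for tok in tokens:
        for topic, words in topic_words.items():
            if tok in words:
                counts[topic] += 1
    total = sum(counts.values()) or 1
    return {t: counts[t] / total for t in topic_words}


def kl_divergence(p, q, eps=1e-8):
    """KL(p || q) with smoothing so zero entries stay finite."""
    return sum(p[t] * math.log((p[t] + eps) / (q.get(t, 0.0) + eps))
               for t in p if p[t] > 0)


def constrained_score(log_prob, tokens, target_topics, topic_words, alpha=1.0):
    """Score a beam candidate: likelihood minus a distributional penalty.

    `log_prob` is the model's log p(response | input); the penalty is the
    KL divergence between the candidate's topic distribution and a target
    distribution estimated from the conversation context.
    """
    penalty = kl_divergence(topic_distribution(tokens, topic_words), target_topics)
    return log_prob - alpha * penalty


if __name__ == "__main__":
    topic_words = {"sports": {"game", "team", "score"},
                   "generic": {"ok", "sure", "yeah"}}
    target = {"sports": 0.9, "generic": 0.1}
    beam = [(-1.2, "yeah ok sure".split()),            # generic but likely
            (-2.0, "great game by the team".split())]  # on-topic, less likely
    best = max(beam, key=lambda c: constrained_score(c[0], c[1], target, topic_words))
    print(" ".join(best[1]))  # the constraint flips the ranking to the on-topic reply
```

In this toy example the generic candidate has the higher raw log-likelihood, but its topic distribution diverges sharply from the target, so the penalty demotes it and the content-rich candidate wins, mirroring the behavior the abstract describes.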