Dialogue Distillation: Open-Domain Dialogue Augmentation Using Unpaired Data
Saved in:
Main authors: | , , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | Recent advances in open-domain dialogue systems rely on the success of neural models that are trained on large-scale data. However, collecting large-scale dialogue data is usually time-consuming and labor-intensive. To address this data dilemma, we propose a novel data augmentation method for training open-domain dialogue models by utilizing unpaired data. Specifically, a data-level distillation process is first proposed to construct augmented dialogues where both post and response are retrieved from the unpaired data. A ranking module is employed to filter out low-quality dialogues. Further, a model-level distillation process is employed to distill a teacher model trained on high-quality paired data to augmented dialogue pairs, thereby preventing dialogue models from being affected by the noise in the augmented data. Automatic and manual evaluation indicates that our method can produce high-quality dialogue pairs with diverse contents, and the proposed data-level and model-level dialogue distillation can improve the performance of competitive baselines. |
DOI: | 10.48550/arxiv.2009.09427 |
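
To make the model-level distillation step described in the summary concrete, the sketch below shows a standard soft-label distillation loss of the kind such a setup could use: the student is trained on augmented (post, response) pairs while being regularized toward the token distribution of a teacher trained on clean paired data. The function name, the `alpha` and `temperature` values, and the PyTorch framing are illustrative assumptions, not the paper's actual implementation.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, target_ids,
                      alpha=0.5, temperature=2.0):
    """Hypothetical sketch: blend a KL term (match the teacher's softened
    token distribution) with cross-entropy against the retrieved, possibly
    noisy, augmented response. Shapes: logits (batch, seq, vocab),
    target_ids (batch, seq)."""
    # KL divergence between teacher and student token distributions,
    # computed at a softened temperature and rescaled as in standard
    # knowledge distillation.
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Ordinary cross-entropy against the augmented response tokens.
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        target_ids.view(-1),
    )
    # alpha balances imitation of the teacher against fitting the noisy data.
    return alpha * kl + (1 - alpha) * ce
```

The weighting is what shields the student from noise in the augmented pairs: when a retrieved response is poor, the teacher's distribution still provides a clean training signal through the KL term.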