Semi-Supervised Knowledge-Grounded Pre-training for Task-Oriented Dialog Systems
Format: Article
Language: English
Abstract: Recent advances in neural approaches have greatly improved task-oriented dialogue (TOD) systems, which assist users in accomplishing their goals. However, such systems rely on costly manually labeled dialogs, which are not available in practical scenarios. In this paper, we present our models for Track 2 of the SereTOD 2022 challenge, the first challenge on building semi-supervised and reinforced TOD systems on MobileCS, a large-scale real-world Chinese TOD dataset. We build a knowledge-grounded dialog model that takes the dialog history and a local knowledge base (KB) as input and predicts the system response, and we perform semi-supervised pre-training on both the labeled and unlabeled data. Our system achieves first place in both the automatic evaluation and the human interaction evaluation, with notably higher BLEU (+7.64) and Success (+13.6%) than the second-place system.
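To make the knowledge-grounded input formulation concrete, below is a minimal Python sketch of how a local KB and the dialog history could be serialized into a single sequence from which a generative dialog model predicts the next system response. The helper names and separator tokens ([KB], [HISTORY], [RESPONSE]) are illustrative assumptions, not the exact format used by the authors.

```python
from typing import Dict, List


def kb_to_text(local_kb: Dict[str, str]) -> str:
    """Flatten a local knowledge base of attribute-value pairs into plain text."""
    return " ".join(f"{slot} = {value}" for slot, value in local_kb.items())


def build_model_input(local_kb: Dict[str, str], history: List[str]) -> str:
    """Concatenate the serialized KB and the dialog history into one sequence.

    The resulting string would be tokenized and fed to a generative dialog
    model trained to produce the system response that follows.
    """
    kb_text = kb_to_text(local_kb)
    history_text = " ".join(history)
    return f"[KB] {kb_text} [HISTORY] {history_text} [RESPONSE]"


if __name__ == "__main__":
    # Hypothetical example in the spirit of the MobileCS customer-service domain.
    kb = {"plan": "30GB data package", "monthly fee": "58 yuan"}
    turns = ["User: How much data does my plan include?"]
    print(build_model_input(kb, turns))
```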
DOI: 10.48550/arxiv.2210.08873