Exploring the Viability of Synthetic Audio Data for Audio-Based Dialogue State Tracking
Format: Article
Language: English
Online access: Order full text
Abstract: Dialogue state tracking (DST) plays a crucial role in extracting information in task-oriented dialogue systems. However, prior research has been limited to textual modalities, primarily due to the shortage of authentic human audio datasets. We address this by investigating synthetic audio data for audio-based DST. To this end, we develop cascading and end-to-end models, train them on our synthetic audio dataset, and test them on actual human speech data. To facilitate evaluation tailored to audio modalities, we introduce a novel metric, PhonemeF1, which captures pronunciation similarity. Experimental results show that models trained solely on synthetic datasets can generalize to human voice data. By eliminating the dependency on human speech data collection, these insights pave the way for significant practical advancements in audio-based DST. Data and code are available at https://github.com/JihyunLee1/E2E-DST.
DOI: 10.48550/arxiv.2312.01842
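The abstract does not define PhonemeF1 beyond saying it captures pronunciation similarity, so the sketch below is an assumption about how such a metric might work, not the authors' actual implementation: convert the predicted and reference strings to phoneme sequences with a grapheme-to-phoneme tool (here the third-party g2p_en package) and compute F1 over the multiset overlap of phonemes. The helper names and the overlap counting are illustrative choices.

```python
"""Minimal PhonemeF1 sketch (assumed formulation, not the paper's code)."""
from collections import Counter

from g2p_en import G2p  # third-party grapheme-to-phoneme converter

_g2p = G2p()


def phonemes(text: str) -> list[str]:
    """Convert text to ARPAbet phonemes, dropping stress digits and word gaps."""
    return [p.strip("012") for p in _g2p(text) if p.strip()]


def phoneme_f1(prediction: str, reference: str) -> float:
    """F1 over the multiset overlap of predicted and reference phonemes."""
    pred, ref = Counter(phonemes(prediction)), Counter(phonemes(reference))
    overlap = sum((pred & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


if __name__ == "__main__":
    # "Maier" vs. "Meyer": an ASR-style misspelling with near-identical
    # pronunciation scores high here, while exact string match would give 0.
    print(phoneme_f1("Maier", "Meyer"))
```

Matching at the phoneme level rather than the character level is what makes such a metric suited to audio-based DST: a cascading or end-to-end model that transcribes a slot value with a plausible but non-canonical spelling still receives credit for getting the pronunciation right.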