UniverSLU: Universal Spoken Language Understanding for Diverse Tasks with Natural Language Instructions
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Recent studies leverage large language models with multi-tasking capabilities, using natural language prompts to guide the model's behavior and surpassing the performance of task-specific models. Motivated by this, we ask: can we build a single model that jointly performs various spoken language understanding (SLU) tasks? We start by adapting a pre-trained automatic speech recognition model to additional tasks using single-token task specifiers. We enhance this approach through instruction tuning, i.e., fine-tuning by describing the task with a natural language instruction followed by the list of label options. At inference time, our approach generalizes to new descriptions of seen tasks, enhancing its user-friendliness. We demonstrate the efficacy of our single multi-task learning model "UniverSLU" on 12 speech classification and sequence generation task types spanning 17 datasets and 9 languages. On most tasks, UniverSLU achieves competitive performance and often even surpasses task-specific models. Additionally, we assess its zero-shot capabilities, finding that the model generalizes to new datasets and languages for seen task types.
DOI: 10.48550/arxiv.2310.02973
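To make the two conditioning schemes in the summary concrete, here is a minimal, hypothetical sketch of how such decoder prompts might be constructed. The prompt wording, task names, and label sets below are illustrative assumptions, not the paper's actual templates.

```python
# Hypothetical sketch of the two prompting schemes described in the summary:
# (1) a single-token task specifier, and (2) a natural language instruction
# followed by the list of label options. All names and templates here are
# assumptions for illustration, not UniverSLU's actual prompt format.

def task_specifier_prompt(task_token: str) -> str:
    """Scheme 1: condition the decoder on a single special task token."""
    return f"<|{task_token}|>"

def instruction_prompt(task_description: str, label_options: list[str]) -> str:
    """Scheme 2: describe the task in natural language, then list the labels."""
    return f"{task_description} Options: {', '.join(label_options)}."

if __name__ == "__main__":
    # Single-token specifier, e.g. for an intent classification task.
    print(task_specifier_prompt("ic"))  # -> <|ic|>

    # Instruction-tuned prompt for the same task; at inference, a paraphrased
    # description of a seen task can be substituted for this one.
    print(instruction_prompt(
        "Classify the intent of the utterance.",
        ["set_alarm", "play_music", "get_weather"],
    ))
```

Under this reading, the instruction form trades the compactness of a fixed task token for prompts a user can rephrase freely, which is what enables generalization to new descriptions of seen tasks.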