Integrating Pretrained ASR and LM to Perform Sequence Generation for Spoken Language Understanding
Format: Article
Language: English
Abstract: There has been increased interest in integrating pretrained speech recognition (ASR) and language models (LM) into the SLU framework. However, prior methods often struggle with a vocabulary mismatch between the pretrained models, and the LM cannot be directly utilized as it diverges from the NLU formulation. In this study, we propose a three-pass end-to-end (E2E) SLU system that effectively integrates ASR and LM subnetworks into the SLU formulation for sequence generation tasks. In the first pass, our architecture predicts ASR transcripts using the ASR subnetwork. This is followed by the LM subnetwork, which makes an initial SLU prediction. Finally, in the third pass, the deliberation subnetwork conditions on representations from the ASR and LM subnetworks to make the final prediction. Our proposed three-pass SLU system shows improved performance over cascaded and E2E SLU models on two benchmark SLU datasets, SLURP and SLUE, especially on acoustically challenging utterances.
DOI: 10.48550/arxiv.2307.11005
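The three-pass flow described in the abstract can be sketched as follows. This is a minimal illustrative skeleton only: the function names, the stub intent label, and the string-based "representations" are assumptions for clarity, not the paper's actual subnetworks, which are neural models.

```python
# Minimal sketch of the three-pass SLU pipeline from the abstract:
# pass 1 (ASR) -> pass 2 (LM) -> pass 3 (deliberation).
# All names and values below are illustrative stubs, not the authors' code.

def asr_subnetwork(speech_frames):
    # Pass 1: predict an ASR transcript.
    # Stub: treat the input as already-recognized words and join them.
    return " ".join(speech_frames)

def lm_subnetwork(transcript):
    # Pass 2: make an initial SLU prediction from the transcript alone.
    # Stub: a fixed intent plus the transcript tokens as "slots".
    return {"intent": "set_alarm", "slots": transcript.split()}

def deliberation_subnetwork(asr_repr, lm_prediction):
    # Pass 3: condition on representations from both earlier passes
    # to produce the final SLU prediction.
    final = dict(lm_prediction)
    final["asr_context"] = asr_repr
    return final

def three_pass_slu(speech_frames):
    transcript = asr_subnetwork(speech_frames)          # first pass
    initial = lm_subnetwork(transcript)                 # second pass
    return deliberation_subnetwork(transcript, initial)  # third pass
```

The point of the structure is that the final prediction never depends on the ASR output alone: the deliberation step sees both the transcript-side and LM-side information, which is what lets the system recover on acoustically challenging utterances.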