Promptformer: Prompted Conformer Transducer for ASR
Format: Article
Language: English
Abstract: Context cues carry information that can improve multi-turn interactions in automatic speech recognition (ASR) systems. In this paper, we introduce a novel mechanism inspired by hyper-prompting to fuse textual context with acoustic representations in the attention mechanism. Results on a test set with multi-turn interactions show that our method achieves a 5.9% relative word error rate reduction (rWERR) over a strong baseline. We show that our method does not degrade in the absence of context and leads to improvements even if the model is trained without context. We further show that leveraging a pre-trained sentence-piece model for context embedding generation can outperform an external BERT model.
DOI: 10.48550/arxiv.2401.07360
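
To illustrate the general idea described in the abstract, below is a minimal sketch of hyper-prompting-style context fusion: textual context embeddings are projected into the acoustic model dimension and prepended to the keys and values of a self-attention layer, so acoustic queries can attend over them. The class name `PromptedSelfAttention`, the dimensions, and the choice to prepend to keys/values are assumptions made for illustration, not the paper's exact Promptformer design.

```python
import torch
import torch.nn as nn


class PromptedSelfAttention(nn.Module):
    """Hypothetical sketch of hyper-prompting-style fusion: context embeddings
    are projected and prepended to the attention keys/values. Names, shapes,
    and placement are assumptions, not the paper's exact design."""

    def __init__(self, d_model: int = 256, n_heads: int = 4, d_context: int = 768):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Map context vectors (e.g. sentence-piece or BERT embeddings of
        # previous turns) into the acoustic model dimension.
        self.context_proj = nn.Linear(d_context, d_model)

    def forward(self, acoustic: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # acoustic: (batch, frames, d_model) encoder features
        # context:  (batch, tokens, d_context) prior-turn text embeddings
        prompts = self.context_proj(context)           # (batch, tokens, d_model)
        kv = torch.cat([prompts, acoustic], dim=1)     # prepend prompt tokens
        # Queries remain acoustic, so the output keeps the frame length; with
        # zero context tokens this reduces to ordinary self-attention.
        out, _ = self.attn(query=acoustic, key=kv, value=kv)
        return out


# Toy usage: 2 utterances, 50 acoustic frames, 8 context tokens.
layer = PromptedSelfAttention()
frames = torch.randn(2, 50, 256)
ctx = torch.randn(2, 8, 768)
print(layer(frames, ctx).shape)  # torch.Size([2, 50, 256])
```

In a full Conformer transducer such a layer would presumably replace the self-attention module inside an encoder block, leaving the convolution and feed-forward modules and the prediction/joint networks unchanged; the zero-context behavior sketched in the comment is consistent with the abstract's claim that the method does not degrade when context is absent.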