Conformer with dual-mode chunked attention for joint online and offline ASR
Saved in:
Main authors: , , , , ,
Format: Article
Language: English
Online access: Order full text
Abstract: In this paper, we present an in-depth study on online attention mechanisms and distillation techniques for dual-mode (i.e., joint online and offline) ASR using the Conformer Transducer. In the dual-mode Conformer Transducer model, layers can function in online or offline mode while sharing parameters, and in-place knowledge distillation from offline to online mode is applied during training to improve online accuracy. In our study, we first demonstrate accuracy improvements from using chunked attention in the Conformer encoder compared to autoregressive attention with and without lookahead. Furthermore, we explore the efficient KLD and 1-best KLD losses with different shifts between the online and offline outputs in the knowledge distillation. Finally, we show that a simplified dual-mode Conformer that has only mode-specific self-attention performs as well as one that also has mode-specific convolutions and normalization. Our experiments are based on two very different datasets: the Librispeech task and an internal corpus of medical conversations. Results show that the proposed dual-mode system using chunked attention yields 5% and 4% relative WER improvement on the Librispeech and medical tasks, respectively, compared to the dual-mode system using autoregressive attention with similar average lookahead.
DOI: 10.48550/arxiv.2206.11157
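The chunked attention contrasted with autoregressive attention in the abstract can be illustrated with attention masks. In chunked attention, each frame attends to all frames in its own fixed-size chunk, so frames early in a chunk see future context while frames at the chunk boundary see none; autoregressive attention instead gives every frame the same fixed lookahead. A minimal sketch (the function names and parameters are illustrative, not from the paper):

```python
import numpy as np

def chunked_attention_mask(seq_len: int, chunk_size: int) -> np.ndarray:
    """Boolean mask: frame i may attend to frame j iff both fall in the
    same fixed-size chunk. The first frame of a chunk sees up to
    chunk_size - 1 future frames, the last sees none, so the average
    lookahead is (chunk_size - 1) / 2 frames."""
    chunk_ids = np.arange(seq_len) // chunk_size
    return chunk_ids[:, None] == chunk_ids[None, :]

def autoregressive_mask(seq_len: int, lookahead: int = 0) -> np.ndarray:
    """Causal mask with a fixed per-frame lookahead, for comparison."""
    i = np.arange(seq_len)
    return i[None, :] <= i[:, None] + lookahead

# With chunk_size=4, frames 0-3 attend only among themselves, as do 4-7.
mask = chunked_attention_mask(seq_len=8, chunk_size=4)
```

Comparing the two masks at equal average lookahead is the setup under which the abstract reports the 5% and 4% relative WER gains.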
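The in-place distillation mentioned in the abstract trains the online (student) mode to match the offline (teacher) mode of the same shared-parameter model, with a shift compensating for the online mode emitting later. As a simplified frame-level illustration only (the paper's efficient KLD and 1-best KLD losses operate on transducer outputs; this sketch and its names are assumptions):

```python
import numpy as np

def kld_distillation_loss(online_logits: np.ndarray,
                          offline_logits: np.ndarray,
                          shift: int = 0) -> float:
    """Mean frame-level KL(offline || online), with the online (student)
    outputs shifted `shift` frames later to align with the offline
    (teacher) outputs. Simplified frame-level sketch."""
    def log_softmax(x):
        x = x - x.max(axis=-1, keepdims=True)
        return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

    if shift > 0:  # align teacher frame t with student frame t + shift
        online_logits = online_logits[shift:]
        offline_logits = offline_logits[:len(online_logits)]
    p_log = log_softmax(offline_logits)  # teacher log-probs (offline mode)
    q_log = log_softmax(online_logits)   # student log-probs (online mode)
    return float((np.exp(p_log) * (p_log - q_log)).sum(axis=-1).mean())

rng = np.random.default_rng(0)
teacher = rng.normal(size=(10, 5))
loss_self = kld_distillation_loss(teacher, teacher)  # identical outputs
```

Because the KL divergence is zero only when the two distributions match, minimizing this loss pulls the online mode's outputs toward the offline mode's.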