Towards Personalization of CTC Speech Recognition Models with Contextual Adapters and Adaptive Boosting
Saved in:
Main authors: , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: End-to-end speech recognition models trained using joint Connectionist Temporal Classification (CTC)-Attention loss have gained popularity recently. In these models, a non-autoregressive CTC decoder is often used at inference time due to its speed and simplicity. However, such models are hard to personalize because of their conditional independence assumption, which prevents output tokens from previous time steps from influencing future predictions. To tackle this, we propose a novel two-way approach that first biases the encoder with attention over a predefined list of rare long-tail and out-of-vocabulary (OOV) words, and then uses dynamic boosting and a phone alignment network during decoding to further bias the subword predictions. We evaluate our approach on the open-source VoxPopuli and in-house medical datasets, showing a 60% improvement in F1 score on domain-specific rare words over a strong CTC baseline.
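The abstract only sketches the two components. Below is a minimal, illustrative PyTorch-style sketch of the general ideas, not the authors' implementation: an encoder-side contextual adapter that cross-attends over embeddings of a bias word list, and a crude stand-in for boosting subword scores before decoding. The module name `ContextualAdapter`, the helper `boost_logprobs`, and all layer sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ContextualAdapter(nn.Module):
    """Cross-attention of encoder frames over embeddings of a bias word list (illustrative)."""
    def __init__(self, enc_dim=256, ctx_dim=128, subword_vocab=1000):
        super().__init__()
        self.word_embed = nn.Embedding(subword_vocab, ctx_dim)   # embed bias-list subwords
        self.ctx_encoder = nn.LSTM(ctx_dim, ctx_dim, batch_first=True)
        self.query = nn.Linear(enc_dim, ctx_dim)                 # acoustic frames -> attention queries
        self.value = nn.Linear(ctx_dim, enc_dim)                 # word summaries -> bias vectors

    def forward(self, enc_out, bias_tokens):
        # enc_out:     (B, T, enc_dim)  acoustic encoder frames
        # bias_tokens: (N, U)           subword ids of the N bias words, padded to length U
        emb = self.word_embed(bias_tokens)             # (N, U, ctx_dim)
        _, (h, _) = self.ctx_encoder(emb)              # summarize each bias word
        keys = h[-1]                                   # (N, ctx_dim)
        q = self.query(enc_out)                        # (B, T, ctx_dim)
        attn = torch.einsum("btd,nd->btn", q, keys).softmax(dim=-1)
        bias = torch.einsum("btn,nd->btd", attn, self.value(keys))  # (B, T, enc_dim)
        return enc_out + bias                          # biased frames fed to the CTC head


def boost_logprobs(log_probs, boosted_ids, bonus=2.0):
    # Crude stand-in for "dynamic boosting": add a fixed bonus to the CTC
    # log-probabilities of subword ids that occur in the bias list before
    # (prefix) beam search. The paper's method instead conditions the boost
    # on the decoding context and a phone alignment network.
    out = log_probs.clone()
    out[..., boosted_ids] += bonus
    return out
```

In practice such boosting is typically applied inside the beam-search loop rather than on the raw posteriors, and the adapter is trained jointly with the base model; both points are assumptions here rather than details taken from the paper.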
DOI: 10.48550/arxiv.2210.09510