Distilling the Knowledge of BERT for CTC-based ASR
Format: Article
Language: English
Abstract: Connectionist temporal classification (CTC)-based models are attractive because of their fast inference in automatic speech recognition (ASR). Language model (LM) integration approaches such as shallow fusion and rescoring can improve the recognition accuracy of CTC-based ASR by taking advantage of the knowledge in text corpora, but they significantly slow down CTC inference. In this study, we propose to distill the knowledge of BERT for CTC-based ASR, extending our previous study on attention-based ASR. The CTC-based ASR model learns the knowledge of BERT during training and does not use BERT during testing, which preserves the fast inference of CTC. Unlike attention-based models, CTC-based models make frame-level predictions, which must be aligned with BERT's token-level predictions for distillation. We propose to obtain these alignments by computing the most plausible CTC paths. Experimental evaluations on the Corpus of Spontaneous Japanese (CSJ) and TED-LIUM2 show that our method improves the performance of CTC-based ASR without sacrificing inference speed.
DOI: 10.48550/arxiv.2209.02030
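The alignment idea described in the abstract can be made concrete with a short sketch. The following is a minimal, hypothetical Python illustration (not the authors' implementation): it finds the most plausible CTC path by Viterbi decoding over the standard CTC lattice (labels interleaved with blanks), maps each frame to the token it is aligned to, and applies a frame-level KL distillation term against BERT's token-level soft labels. The blank index, the array shapes, and the exact form of the loss are assumptions made for illustration.

```python
import numpy as np

BLANK = 0  # assumed index of the CTC blank symbol


def ctc_viterbi_alignment(log_probs, labels):
    """For each frame, return the index of the token it is aligned to on the
    most plausible CTC path, or -1 for frames assigned to blank.
    log_probs: (T, V) frame-level CTC log-probabilities; labels: token ids."""
    T, _ = log_probs.shape
    ext = [BLANK]                       # labels interleaved with blanks
    for l in labels:
        ext += [l, BLANK]
    S = len(ext)

    NEG = -1e30
    delta = np.full((T, S), NEG)        # best log-prob of reaching state s at time t
    back = np.zeros((T, S), dtype=int)  # backpointers

    delta[0, 0] = log_probs[0, ext[0]]
    if S > 1:
        delta[0, 1] = log_probs[0, ext[1]]

    for t in range(1, T):
        for s in range(S):
            cands = [(delta[t - 1, s], s)]
            if s >= 1:
                cands.append((delta[t - 1, s - 1], s - 1))
            # skipping a blank is allowed only between different non-blank labels
            if s >= 2 and ext[s] != BLANK and ext[s] != ext[s - 2]:
                cands.append((delta[t - 1, s - 2], s - 2))
            best_val, best_prev = max(cands)
            delta[t, s] = best_val + log_probs[t, ext[s]]
            back[t, s] = best_prev

    # the path must end in the last label or the final blank
    s = max([S - 1, S - 2] if S > 1 else [S - 1], key=lambda k: delta[T - 1, k])
    states = [s]
    for t in range(T - 1, 0, -1):
        s = back[t, s]
        states.append(s)
    states.reverse()

    # odd lattice positions correspond to tokens, even positions to blanks
    return [(st - 1) // 2 if st % 2 == 1 else -1 for st in states]


def bert_distillation_loss(ctc_log_probs, bert_log_probs, labels):
    """KL(BERT || CTC) averaged over frames aligned to non-blank tokens.
    bert_log_probs: (U, V) token-level soft labels from BERT.
    The exact loss form is an assumption made for this illustration."""
    align = ctc_viterbi_alignment(ctc_log_probs, labels)
    loss, n = 0.0, 0
    for t, u in enumerate(align):
        if u < 0:
            continue                                  # skip blank frames
        p_teacher = np.exp(bert_log_probs[u])         # BERT soft labels for token u
        loss += float(np.sum(p_teacher * (bert_log_probs[u] - ctc_log_probs[t])))
        n += 1
    return loss / max(n, 1)
```

Added on top of the usual CTC loss during training, such a term lets the acoustic model absorb BERT's contextual predictions while leaving inference entirely unchanged, which is the property the abstract emphasizes.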