Inter-KD: Intermediate Knowledge Distillation for CTC-Based Automatic Speech Recognition
Main authors: (not listed in this record)
Format: Article
Language: English
Keywords: (none listed)
Online access: Order full text
Abstract: Recently, advances in deep learning have brought considerable improvement to the end-to-end speech recognition field, simplifying the traditional pipeline while producing promising results. Among end-to-end models, the connectionist temporal classification (CTC)-based model has attracted research interest due to its non-autoregressive nature. However, such CTC models require heavy computational cost to achieve outstanding performance. To mitigate this computational burden, we propose a simple yet effective knowledge distillation (KD) method for the CTC framework, namely Inter-KD, which additionally transfers the teacher's knowledge to the intermediate CTC layers of the student network. Experimental results on LibriSpeech verify that Inter-KD outperforms conventional KD methods. Without using any language model (LM) or data augmentation, Inter-KD improves the word error rate (WER) from 8.85% to 6.30% on test-clean.
DOI: 10.48550/arxiv.2211.15075
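
The abstract describes the core idea: besides distilling the teacher at the student's output, the teacher's frame-level distribution is also transferred to intermediate CTC layers of the student. Below is a minimal PyTorch sketch of what such a combined training loss could look like, assuming a frozen teacher and a student with auxiliary CTC heads; the names (`inter_kd_loss`, `student_inter_logits`, the weights `alpha` and `tau`) are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of an Inter-KD-style training loss for a CTC student:
# CTC loss on the final head plus frame-level KL distillation from a
# frozen teacher into every CTC head, intermediate and final alike.
# All names here are illustrative assumptions, not the authors' code.
import torch
import torch.nn.functional as F

def inter_kd_loss(student_inter_logits,   # list of (T, B, V) logits from intermediate CTC heads
                  student_final_logits,   # (T, B, V) logits from the final CTC head
                  teacher_logits,         # (T, B, V) logits from the frozen teacher
                  input_lengths,          # (B,) valid frame counts per utterance
                  targets,                # (B, S) padded label sequences
                  target_lengths,         # (B,) valid label counts
                  alpha=0.5,              # weight between CTC and distillation terms
                  tau=1.0):               # softmax temperature for distillation
    # Supervised CTC loss on the student's final output.
    ctc = F.ctc_loss(F.log_softmax(student_final_logits, dim=-1),
                     targets, input_lengths, target_lengths)
    # Teacher's frame-level distribution, detached so no gradient flows back.
    teacher_probs = F.softmax(teacher_logits / tau, dim=-1).detach()
    # Distill the teacher into each head (intermediate layers + final layer).
    heads = list(student_inter_logits) + [student_final_logits]
    kd = sum(F.kl_div(F.log_softmax(logits / tau, dim=-1),
                      teacher_probs, reduction="batchmean")
             for logits in heads) / len(heads)
    return (1 - alpha) * ctc + alpha * kd
```

In intermediate-CTC-style schemes like this, the auxiliary heads are typically used only during training and discarded at inference, so the student's run-time cost stays unchanged.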