Exploring RNN-Transducer for Chinese Speech Recognition
Format: Article
Language: English
Abstract: End-to-end approaches have drawn much attention recently for significantly simplifying the construction of an automatic speech recognition (ASR) system. The RNN transducer (RNN-T) is one of the popular end-to-end methods. Previous studies have shown that RNN-T is difficult to train and that a very complex training process is needed to reach reasonable performance. In this paper, we explore RNN-T for a Chinese large vocabulary continuous speech recognition (LVCSR) task and aim to simplify the training process while maintaining performance. First, a new learning rate decay strategy is proposed to accelerate model convergence. Second, we find that adding convolutional layers at the beginning of the network and using ordered data allows the encoder pre-training stage to be discarded without loss of performance. In addition, we design experiments to find a balance among GPU memory usage, training cycle length, and model performance. Finally, we achieve a 16.9% character error rate (CER) on our test set, a 2% absolute improvement over a strong BLSTM-CE system with a language model trained on the same text corpus.
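
The abstract names a new learning-rate decay strategy but does not describe it. As a minimal illustration only, the sketch below assumes a common scheme for this kind of training: halve the rate whenever the development-set loss stops improving. Every name here (DevLossDecay, initial_lr, decay_factor, patience) is hypothetical and not taken from the paper.

```python
# Illustrative sketch, NOT the paper's schedule: step-wise decay driven
# by development-set loss, a common recipe for RNN-T training.

class DevLossDecay:
    """Halve the learning rate when the dev loss fails to improve."""

    def __init__(self, initial_lr=1e-3, decay_factor=0.5, patience=1):
        self.lr = initial_lr
        self.decay_factor = decay_factor
        self.patience = patience          # epochs to wait before decaying
        self.best_loss = float("inf")
        self.bad_epochs = 0

    def step(self, dev_loss):
        """Call once per epoch with the current dev loss; returns the lr."""
        if dev_loss < self.best_loss:
            self.best_loss = dev_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.lr *= self.decay_factor
                self.bad_epochs = 0
        return self.lr
```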
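Similarly, "convolutional layers at the beginning of the network" usually means a strided 2-D convolutional frontend that downsamples the input spectrogram in time before the recurrent encoder layers, shrinking both the GPU memory footprint and the sequence length the LSTMs must process. The PyTorch sketch below illustrates the idea; all layer sizes (n_mels, hidden, channel counts) are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ConvFrontendEncoder(nn.Module):
    """Assumed sketch of an RNN-T encoder with a convolutional frontend."""

    def __init__(self, n_mels=80, hidden=512):
        super().__init__()
        # Two stride-2 convolutions reduce time resolution 4x before the RNN.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        feat = 32 * ((n_mels + 3) // 4)   # channels * downsampled mel bins
        self.rnn = nn.LSTM(feat, hidden, num_layers=4, batch_first=True)

    def forward(self, x):                  # x: (batch, time, n_mels)
        x = self.conv(x.unsqueeze(1))      # -> (batch, 32, ~time/4, ~mels/4)
        b, c, t, f = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * f)
        out, _ = self.rnn(x)
        return out                         # encoder states for the joint net
```

In an RNN-T, these encoder states would be combined with a prediction network's output in a joint network to produce the output distribution at each (time, label) position.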
DOI: 10.48550/arxiv.1811.05097