Improved TDNNs using Deep Kernels and Frequency Dependent Grid-RNNs
Main authors: , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Time delay neural networks (TDNNs) are an effective acoustic model for large
vocabulary speech recognition. The strength of the model can be attributed to
its ability to effectively model long temporal contexts. However, current TDNN
models are relatively shallow, which limits the modelling capability. This
paper proposes a method of increasing the network depth by deepening the kernel
used in the TDNN temporal convolutions. The best performing kernel consists of
three fully connected layers with a residual (ResNet) connection from the
output of the first to the output of the third. The addition of
spectro-temporal processing at the input of the TDNN, in the form of a
convolutional neural network (CNN) and a newly designed Grid-RNN, was also
investigated. The Grid-RNN strongly outperforms a CNN when different sets of
parameters are used for different frequency bands, and it can be further
enhanced by using a bi-directional Grid-RNN. Experiments on the multi-genre
broadcast (MGB3) English data (275 h) show that deep kernel TDNNs reduce the
word error rate (WER) by 6% relative, and that combining them with the
frequency-dependent Grid-RNN gives a relative WER reduction of 9%.
DOI: 10.48550/arxiv.1802.06412
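
The deep kernel described in the abstract (three fully connected layers with a residual connection from the output of the first layer to the output of the third, used as the kernel of the TDNN temporal convolution) can be illustrated with a short sketch. The NumPy code below is a minimal rendering under assumed details: ReLU activations, equal hidden widths, a (-2, 0, 2) context window, and the residual added to the third layer's pre-activation. The function name and all shapes are illustrative, not taken from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def deep_kernel_tdnn_layer(frames, context, W1, W2, W3):
    """Apply one deep-kernel TDNN layer (illustrative sketch).

    frames: (T, d_in) input feature frames.
    context: temporal offsets spliced together, e.g. (-2, 0, 2).
    W1: (len(context) * d_in, h); W2, W3: (h, h).
    """
    T, _ = frames.shape
    outputs = []
    for t in range(T):
        # Splice the temporal context (clamped at utterance edges);
        # this concatenation is the input of the temporal convolution.
        spliced = np.concatenate(
            [frames[min(max(t + c, 0), T - 1)] for c in context])
        h1 = relu(spliced @ W1)   # first fully connected layer
        h2 = relu(h1 @ W2)        # second fully connected layer
        # Third layer, with the residual from h1 added to the
        # pre-activation (the exact placement is an assumption).
        h3 = relu(h2 @ W3 + h1)
        outputs.append(h3)
    return np.stack(outputs)

# Toy usage: 100 frames of 40-dim features, hidden width 256.
rng = np.random.default_rng(0)
x = rng.standard_normal((100, 40))
W1 = 0.05 * rng.standard_normal((3 * 40, 256))
W2 = 0.05 * rng.standard_normal((256, 256))
W3 = 0.05 * rng.standard_normal((256, 256))
print(deep_kernel_tdnn_layer(x, (-2, 0, 2), W1, W2, W3).shape)  # (100, 256)
```

The spliced-context concatenation plays the role of the temporal convolution; deepening the kernel replaces the usual single affine map over that splice with the three-layer residual stack.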