Exploiting sparsity in stranded hidden Markov models for automatic speech recognition

Bibliographic Details
Main Authors: Yong Zhao, Biing-Hwang Juang
Format: Conference Proceedings
Language: English
Description
Summary: We have recently proposed the stranded HMM to achieve a more accurate representation of heterogeneous data. As opposed to the regular Gaussian mixture HMM, the stranded HMM explicitly models the relationships among the mixture components. The transitions among mixture components encode possible trajectories of acoustic features for speech units. Accurately representing the underlying transition structure is crucial for the stranded HMM to produce optimal recognition performance. In this paper, we propose to learn the stranded HMM structure by imposing sparsity constraints. In particular, entropic priors are incorporated into the maximum a posteriori (MAP) estimation of the mixture transition matrices. The experimental results show that a significant improvement in model sparsity can be obtained with only a slight sacrifice in recognition accuracy.
ISSN: 1058-6393, 2576-2303
DOI: 10.1109/ACSSC.2012.6489305
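
The abstract's key technical step is MAP estimation of the mixture transition matrices under entropic priors, which drives weak mixture transitions toward zero and yields a sparse model structure. The Python sketch below is a minimal, hypothetical illustration of that idea for a single row of a mixture transition matrix, assuming expected transition counts (omega) accumulated during an E-step; the function name entropic_map_row, the prior weight beta, and the pruning threshold are illustrative assumptions, not the paper's actual re-estimation formulas.

import numpy as np
from scipy.optimize import minimize

def entropic_map_row(omega, beta=1.0, floor=1e-8, prune=1e-3):
    # MAP estimate of one probability row theta, given expected transition
    # counts omega for that row. The objective is
    #     sum_i omega_i * log(theta_i) + beta * sum_i theta_i * log(theta_i),
    # where the second term equals -beta * H(theta), so it rewards
    # low-entropy (sparse) rows. theta is constrained to the simplex.
    omega = np.asarray(omega, dtype=float)
    k = omega.size

    def neg_log_posterior(theta):
        theta = np.clip(theta, floor, 1.0)
        log_lik = np.sum(omega * np.log(theta))
        neg_entropy = np.sum(theta * np.log(theta))
        return -(log_lik + beta * neg_entropy)

    x0 = (omega + 1.0) / np.sum(omega + 1.0)  # smoothed ML starting point
    res = minimize(neg_log_posterior, x0, method="SLSQP",
                   bounds=[(floor, 1.0)] * k,
                   constraints=[{"type": "eq",
                                 "fun": lambda t: np.sum(t) - 1.0}])
    theta = np.clip(res.x, 0.0, 1.0)
    theta[theta < prune] = 0.0   # extinguish near-zero mixture transitions
    return theta / theta.sum()   # renormalize the surviving probability mass

# Hypothetical expected counts of transitions into six mixture components.
omega = np.array([40.0, 0.5, 7.0, 0.2, 0.1, 12.0])
print(entropic_map_row(omega, beta=0.0))   # beta = 0 reduces to the ML estimate
print(entropic_map_row(omega, beta=10.0))  # larger beta favors a sparser, lower-entropy row

In this sketch, increasing beta biases each row toward a lower-entropy solution, so more mixture transitions fall below the pruning threshold; applied row by row after each EM pass, this is one simple way to realize the sparsity/accuracy trade-off the abstract describes.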