Dirichlet Mixture Models of neural net posteriors for HMM-based speech recognition
Format: Conference paper
Language: English
Abstract: In this paper, we present a novel technique for modeling the posterior probability estimates obtained from a neural network directly in the HMM framework using Dirichlet Mixture Models (DMMs). Since posterior probability vectors lie on a probability simplex, their distribution can be modeled using DMMs. Because DMMs belong to the exponential family, their parameters can be estimated efficiently. Conventional approaches such as TANDEM attempt to Gaussianize the posteriors with suitable transforms and model them using Gaussian Mixture Models (GMMs). This requires more parameters, as it does not exploit the fact that the probability vectors lie on a simplex. We demonstrate through TIMIT phoneme recognition experiments that the proposed technique outperforms the conventional TANDEM approach.
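To illustrate the core idea in the abstract, the sketch below evaluates a Dirichlet mixture density over neural-net posterior vectors on the probability simplex, including the component responsibilities that an EM-style parameter estimation would use. This is a minimal illustration, not the paper's implementation; the mixture weights and Dirichlet concentration parameters are hypothetical values chosen for the example.

```python
import math

def dirichlet_logpdf(x, alpha):
    """Log-density of Dirichlet(alpha) at a point x on the probability simplex."""
    log_norm = math.lgamma(sum(alpha)) - sum(math.lgamma(a) for a in alpha)
    return log_norm + sum((a - 1.0) * math.log(xi) for a, xi in zip(alpha, x))

def mixture_loglik(x, weights, alphas):
    """Log-likelihood of x under a Dirichlet mixture (log-sum-exp for stability)."""
    logs = [math.log(w) + dirichlet_logpdf(x, a) for w, a in zip(weights, alphas)]
    m = max(logs)
    return m + math.log(sum(math.exp(l - m) for l in logs))

def responsibilities(x, weights, alphas):
    """Posterior probability of each mixture component given x (EM E-step)."""
    logs = [math.log(w) + dirichlet_logpdf(x, a) for w, a in zip(weights, alphas)]
    m = max(logs)
    unnorm = [math.exp(l - m) for l in logs]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Hypothetical two-component mixture over 3-class posteriors:
# one component peaked near class 0, one diffuse over the simplex.
weights = [0.6, 0.4]
alphas = [[8.0, 1.0, 1.0], [2.0, 2.0, 2.0]]

# A neural-net posterior vector (entries positive, summing to 1).
posterior = [0.85, 0.10, 0.05]
ll = mixture_loglik(posterior, weights, alphas)
r = responsibilities(posterior, weights, alphas)
```

Because the Dirichlet is defined directly on the simplex, no Gaussianizing transform of the posteriors is needed, which is the parameter saving the abstract refers to relative to TANDEM-style GMM modeling.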
ISSN: 1520-6149, 2379-190X
DOI: 10.1109/ICASSP.2011.5947486