Phoneme classification in hardware implemented neural networks
Saved in:
Main Authors: , , ,
Format: Conference Proceeding
Language: eng
Subjects:
Online Access: Order full text
Summary: Among speech researchers, it is widely believed that Hidden Markov Models (HMMs) are the most successful modelling approach for acoustic events in speech recognition. However, common assumptions limit the classification abilities of HMMs, and these can be relaxed by introducing neural networks into the HMM framework. With today's advances in VLSI technology, artificial neural networks (ANNs) can be integrated into a single chip, offering the circuit complexity required to attain both high recognition accuracy and an improved learning time. Analogue implementations are considered because of their high processing speeds. The relative performance of different speech coding parameters for use with two different ANN architectures that lend themselves to analogue hardware implementation is investigated. In this case, the dynamic ranges of the different coefficients need to be taken into consideration, since scaling the coefficients to voltage signals affects the performance of the analogue chip. The hardware requirements for implementing the two architectures are then discussed.
DOI: 10.1109/ICECS.2001.957783
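The abstract notes that the dynamic ranges of the speech coding coefficients matter because they are scaled onto the chip's voltage signals. As a minimal sketch of that idea only: the function below scales each coefficient column into an assumed ±1 V swing by its own observed range. The feature values, the voltage limits, and the function name are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def scale_to_voltage(frames, v_min=-1.0, v_max=1.0):
    """Map each coefficient column of `frames` into [v_min, v_max] volts.

    frames: array of shape (n_frames, n_coeffs). Each coefficient is
    scaled by its own observed dynamic range, since coefficients with
    very different ranges would otherwise be compressed or clipped when
    driven onto a fixed analogue voltage swing (values here are
    hypothetical, for illustration only).
    """
    lo = frames.min(axis=0)
    hi = frames.max(axis=0)
    span = np.where(hi - lo > 0, hi - lo, 1.0)  # guard against constant columns
    return v_min + (frames - lo) / span * (v_max - v_min)

# Toy usage: 100 frames of 12 coefficients with deliberately unequal ranges.
rng = np.random.default_rng(0)
features = rng.normal(scale=np.linspace(0.1, 5.0, 12), size=(100, 12))
voltages = scale_to_voltage(features)
print(voltages.min(axis=0), voltages.max(axis=0))  # each column now spans roughly [-1, 1]
```

Per-coefficient scaling is one plausible choice; a single global scale factor would preserve the relative magnitudes of the coefficients but leave small-range coefficients using only a fraction of the available voltage swing.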