Automatic Emotion Recognition from Speech Using Artificial Neural Networks with Gender-Dependent Databases
Main Authors:
Format: Conference Proceedings
Language: English
Subjects:
Online Access: Order full text
Abstract: Automatic Emotion Recognition (AER) from speech is one of the most important subdomains of affective computing. We have created and analyzed two emotional speech databases, one of male and one of female speech. Instead of phonetic and prosodic features, we use the Discrete Wavelet Transform (DWT) for feature vector creation, and an Artificial Neural Network (ANN) for pattern classification and recognition. We obtained a recognition accuracy of 72.055% for the male speech database and 65.5% for the female speech database. Malayalam (one of the South Indian languages) was chosen for the experiment. Four emotions, neutral, happy, sad, and anger, are recognized using the DWT and ANN, and the performance on the two databases is compared.
DOI: 10.1109/ACT.2009.49
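The abstract only outlines the pipeline: DWT coefficients are condensed into feature vectors, which an ANN then classifies into the four emotions. The sketch below illustrates that general approach, not the paper's actual implementation; the wavelet ("db4"), decomposition level, sub-band statistics, and the use of PyWavelets and scikit-learn's MLPClassifier are all assumptions made for illustration.

```python
# Minimal sketch of a DWT-feature + ANN emotion classifier.
# Wavelet, decomposition level, feature statistics, and libraries are
# assumptions; the paper does not specify its implementation details.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["neutral", "happy", "sad", "anger"]

def dwt_features(signal, wavelet="db4", level=4):
    """Decompose a speech signal with the DWT and summarize each
    sub-band by its energy and standard deviation (assumed design)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats.append(np.sum(c ** 2))   # sub-band energy
        feats.append(np.std(c))        # sub-band spread
    return np.array(feats)

# Toy signals standing in for the Malayalam male/female utterances.
rng = np.random.default_rng(0)
X = np.array([dwt_features(rng.standard_normal(8000)) for _ in range(40)])
y = rng.integers(0, len(EMOTIONS), size=40)

# A small feed-forward ANN as the pattern classifier.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)
print("Predicted emotion:", EMOTIONS[clf.predict(X[:1])[0]])
```

In practice the same feature extractor would be run separately over the male and female databases to train gender-dependent models, matching the paper's comparison of the two.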