Automatic Emotion Recognition from Speech Using Artificial Neural Networks with Gender-Dependent Databases

Automatic Emotion Recognition (AER) from speech is one of the most important subdomains of affective computing. We have created and analyzed two emotional speech databases, one of male and one of female speech. Instead of using phonetic and prosodic features, we have used the Discrete Wavelet Transform (DWT) technique for feature vector creation. An artificial neural network is used for pattern classification and recognition. We obtained a recognition accuracy of 72.055% for the male speech database and 65.5% for the female speech database. Malayalam (one of the South Indian languages) was chosen for the experiment. We recognized four emotions (neutral, happy, sad, and anger) using Discrete Wavelet Transforms (DWT) and an Artificial Neural Network (ANN), and the performance on the two databases is compared.
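
The abstract names the two building blocks (DWT feature vectors, an ANN classifier) but none of the implementation details. The sketch below is a minimal, hypothetical reconstruction of that pipeline in Python, assuming PyWavelets for the DWT and scikit-learn's MLPClassifier as the neural network; the wavelet family (db4), decomposition level, per-band statistics, and network size are illustrative assumptions, not the authors' settings.

```python
import numpy as np
import pywt                                       # PyWavelets
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["neutral", "happy", "sad", "anger"]   # the four classes studied

def dwt_features(signal, wavelet="db4", level=4):
    # Multi-level DWT; wavedec returns [approximation, detail_n, ..., detail_1].
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for band in coeffs:
        # Per-band summary statistics (an assumption; the paper does not
        # state how the coefficients become a fixed-length feature vector).
        feats.extend([np.mean(band), np.std(band), np.mean(np.abs(band))])
    return np.asarray(feats)

def train_emotion_model(utterances, labels):
    # One model per gender-dependent database, matching the paper's setup.
    X = np.vstack([dwt_features(u) for u in utterances])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    clf.fit(X, labels)
    return clf

# Hypothetical usage with random stand-in signals (no real speech data here):
rng = np.random.default_rng(0)
male_utts = [rng.standard_normal(8000) for _ in range(40)]
male_labels = [EMOTIONS[i % 4] for i in range(40)]
male_model = train_emotion_model(male_utts, male_labels)
print(male_model.predict([dwt_features(male_utts[0])]))
```

Training one such model on the male database and another on the female database would reproduce the gender-dependent comparison the abstract describes.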

Bibliographic details
Main authors: Firoz, S.A., Raji, S.A., Babu, A.P.
Format: Conference proceedings
Language: English
container_end_page 164
container_start_page 162
creator Firoz, S.A.; Raji, S.A.; Babu, A.P.
description Automatic Emotion Recognition (AER) from speech is one of the most important subdomains of affective computing. We have created and analyzed two emotional speech databases, one of male and one of female speech. Instead of using phonetic and prosodic features, we have used the Discrete Wavelet Transform (DWT) technique for feature vector creation. An artificial neural network is used for pattern classification and recognition. We obtained a recognition accuracy of 72.055% for the male speech database and 65.5% for the female speech database. Malayalam (one of the South Indian languages) was chosen for the experiment. We recognized four emotions (neutral, happy, sad, and anger) using Discrete Wavelet Transforms (DWT) and an Artificial Neural Network (ANN), and the performance on the two databases is compared.
doi_str_mv 10.1109/ACT.2009.49
format Conference Proceeding
identifier ISBN: 1424453216; ISBN: 9781424453214; EISBN: 0769539157; EISBN: 9780769539157
ispartof 2009 International Conference on Advances in Computing, Control, and Telecommunication Technologies, 2009, p.162-164
language eng
recordid cdi_ieee_primary_5376782
source IEEE Electronic Library (IEL) Conference Proceedings
subjects Affective Computing
Artificial neural networks
Automatic Emotion Recognition
Discrete Wavelet Transform
Discrete wavelet transforms
Emotion recognition
Filter bank
Filtering
Frequency
Humans
Low pass filters
Multi Layer Perceptron
Speech
title Automatic Emotion Recognition from Speech Using Artificial Neural Networks with Gender-Dependent Databases
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-13T06%3A35%3A53IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_6IE&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Automatic%20Emotion%20Recognition%20from%20Speech%20Using%20Artificial%20Neural%20Networks%20with%20Gender-Dependent%20Databases&rft.btitle=2009%20International%20Conference%20on%20Advances%20in%20Computing,%20Control,%20and%20Telecommunication%20Technologies&rft.au=Firoz,%20S.A.&rft.date=2009-12&rft.spage=162&rft.epage=164&rft.pages=162-164&rft.isbn=1424453216&rft.isbn_list=9781424453214&rft_id=info:doi/10.1109/ACT.2009.49&rft_dat=%3Cieee_6IE%3E5376782%3C/ieee_6IE%3E%3Curl%3E%3C/url%3E&rft.eisbn=0769539157&rft.eisbn_list=9780769539157&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=5376782&rfr_iscdi=true