Artificial Neural Network based Emotion Classification and Recognition from Speech

Emotion recognition from speech signals remains a challenging task, so proposing an efficient and accurate technique for speech-based emotion recognition is an important goal. This study focuses on recognizing four basic human emotions (sad, angry, happy, and neutral) with an artificial neural network, so that emotions detected from vocal expressions can drive more efficient and productive machine behavior. An effective model based on a Bayesian regularized artificial neural network (BRANN) is proposed for speech-based emotion recognition. Experiments are conducted on the well-known Berlin database of 1470 speech samples carrying basic emotions: 500 samples of anger, 300 of happiness, 350 of a neutral state, and 320 of sadness. Four speech features (frequency, pitch, amplitude, and formant) are used to recognize the four basic emotions. The performance of the proposed methodology is compared with state-of-the-art methodologies for emotion recognition from speech; it achieves 95% recognition accuracy, the highest among the compared techniques in this domain.
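The pipeline the abstract describes — extract prosodic features from each speech sample, then classify them with a regularized neural model — can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the two features (RMS amplitude and an autocorrelation pitch estimate) stand in for the paper's frequency/pitch/amplitude/formant set, a plain L2-regularized softmax classifier stands in for the BRANN, and the synthetic tones stand in for the Berlin database. All function names here (`extract_features`, `train_softmax`, `predict`) are hypothetical.

```python
import numpy as np

def extract_features(signal, sr=16000):
    """Two simple prosodic features: RMS amplitude and an
    autocorrelation-based fundamental-frequency (pitch) estimate."""
    rms = np.sqrt(np.mean(signal ** 2))
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lag = np.argmax(ac[20:]) + 20            # skip very short lags (f0 > sr/20)
    return np.array([rms, sr / lag / 1000.0])  # [RMS, estimated f0 in kHz]

def train_softmax(X, y, n_classes, l2=1e-3, lr=0.5, epochs=500):
    """Multinomial logistic regression with an L2 weight penalty --
    a crude stand-in for the Bayesian-regularized network."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    Y = np.eye(n_classes)[y]                   # one-hot targets
    W = np.zeros((Xb.shape[1], n_classes))
    for _ in range(epochs):
        logits = Xb @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * (Xb.T @ (p - Y) / len(X) + l2 * W)
    return W

def predict(W, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.argmax(Xb @ W, axis=1)

# Synthetic demo: "angry" = loud, high-pitched tones; "sad" = quiet, low-pitched.
sr = 16000
t = np.arange(sr // 8) / sr                    # 0.125 s per clip
rng = np.random.default_rng(1)
X, y = [], []
for _ in range(20):
    f, a = rng.uniform(220, 260), rng.uniform(0.8, 1.0)   # "angry"
    X.append(extract_features(a * np.sin(2 * np.pi * f * t), sr)); y.append(0)
    f, a = rng.uniform(100, 130), rng.uniform(0.2, 0.4)   # "sad"
    X.append(extract_features(a * np.sin(2 * np.pi * f * t), sr)); y.append(1)
X, y = np.array(X), np.array(y)
W = train_softmax(X, y, n_classes=2)
accuracy = (predict(W, X) == y).mean()
```

On synthetic data this toy classifier separates the two classes easily; the paper's reported 95% accuracy concerns real emotional speech with four classes, a much harder setting.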


Bibliographic Details
Published in: International journal of advanced computer science & applications, 2020-12, Vol. 11 (12)
Main authors: Iqbal, Mudasser; Ali, Syed; Abid, Muhammad; Majeed, Furqan; Ali, Ans
Format: Article
Language: English
Subjects: Artificial neural networks; Emotion recognition; Emotions; Neural networks; Speech; Speech recognition
Online access: Full text
Publisher: Science and Information (SAI) Organization Limited, West Yorkshire
DOI: 10.14569/IJACSA.2020.0111253
ISSN: 2158-107X
EISSN: 2156-5570
Source: EZB Electronic Journals Library
Full-text link: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-17T16%3A02%3A28IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Artificial%20Neural%20Network%20based%20Emotion%20Classification%20and%20Recognition%20from%20Speech&rft.jtitle=International%20journal%20of%20advanced%20computer%20science%20&%20applications&rft.au=Iqbal,%20Mudasser&rft.date=2020-12-01&rft.volume=11&rft.issue=12&rft.issn=2158-107X&rft.eissn=2156-5570&rft_id=info:doi/10.14569/IJACSA.2020.0111253&rft_dat=%3Cproquest_cross%3E2655121764%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2655121764&rft_id=info:pmid/&rfr_iscdi=true