Emotion recognition from syllabic units using k-nearest-neighbor classification and energy distribution

In this article, we present an automatic technique for recognizing emotional states from speech signals. The main focus of this paper is to present an efficient and reduced set of acoustic features that allows us to recognize the four basic human emotions (anger, sadness, joy, and neutral). The proposed feature vector is composed of twenty-eight measurements corresponding to standard acoustic features such as formants and fundamental frequency (obtained with the Praat software), as well as new features based on the energies in specific frequency bands and their distributions (computed with MATLAB code). The measurements are extracted from consonant-vowel (CV) syllabic units derived from the Moroccan Arabic dialect emotional database (MADED) corpus. The collected data are then used to train a k-nearest-neighbor (KNN) classifier that performs the automated recognition phase. Recognition rates reach 64.65% for multi-class classification and 94.95% for binary classification between positive and negative emotions.
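The abstract names the pipeline but not the code; as a minimal illustrative sketch (not the authors' implementation), the Python below computes band-energy features and their relative distribution over hypothetical frequency bands, then trains a KNN classifier on toy data standing in for the MADED CV segments. The band edges, frame parameters, and k are assumptions.

import numpy as np
from scipy.signal import welch
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical band edges in Hz -- the abstract does not list the paper's
# actual bands, so these four are placeholders chosen for illustration only.
BANDS = [(0, 500), (500, 1000), (1000, 2000), (2000, 4000)]

def band_energy_features(signal, fs):
    # Welch estimate of the power spectral density, then a simple band-power
    # sum per band plus each band's share of the total (the "distribution").
    freqs, psd = welch(signal, fs=fs, nperseg=512)
    energies = np.array([psd[(freqs >= lo) & (freqs < hi)].sum()
                         for lo, hi in BANDS])
    total = max(energies.sum(), 1e-12)   # avoid division by zero on silence
    return np.concatenate([energies, energies / total])

# Toy stand-ins for CV syllabic segments: 250 ms of noise at 16 kHz each.
rng = np.random.default_rng(0)
fs = 16000
X = np.vstack([band_energy_features(rng.standard_normal(fs // 4), fs)
               for _ in range(40)])
y = rng.integers(0, 4, size=40)   # labels 0..3 ~ anger, sadness, joy, neutral

# k-nearest-neighbor classification, as in the paper; k=5 is an assumption.
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(knn.predict(X[:5]))

In the paper itself, formants and fundamental frequency come from Praat and the band energies from MATLAB; this sketch only mirrors the general shape of that pipeline.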


Bibliographic details
Published in: International journal of electrical and computer engineering (Malacca, Malacca), 2021-12, Vol.11 (6), p.5438
Main authors: Agrima, Abdellah; Mounir, Ilham; Farchi, Abdelmajid; Elmaazouzi, Laila; Mounir, Badia
Format: Article
Language: English
Subjects: Classification; Consonants (speech); Emotion recognition; Emotional factors; Emotions; Energy distribution; Resonant frequencies; Speech recognition
Online access: Full text
DOI: 10.11591/ijece.v11i6.pp5438-5449
Publisher: IAES Institute of Advanced Engineering and Science, Yogyakarta
ISSN: 2088-8708
EISSN: 2722-2578