Optimal trained ensemble of classification model for speech emotion recognition: Considering cross-lingual and multi-lingual scenarios

Speech plays a significant role in conveying emotional information, and speech emotion recognition (SER) has emerged as a crucial component of the human–computer interface, with high demands on real-time performance and accuracy. This paper proposes a novel Improved Coot Optimization-based Ensemble Classification (ICO-EC) model for SER that follows three stages: preprocessing, feature extraction, and classification. In the preprocessing step, the class-imbalance problem is resolved using Improved SMOTE-ENC. In the feature-extraction stage, IMFCC-based, chroma-based, zero-crossing-rate (ZCR)-based, and spectral roll-off-based features are extracted. The final stage is classification, in which an ensemble model combines three classifiers: Deep Maxout, LSTM, and ICNN. The training process is made optimal via Improved Coot Optimization (ICO), which tunes the ensemble weights. Finally, the performance of the developed model is validated against conventional methods on four different databases. In the cross-lingual setting, the proposed model achieves accuracies of 92.76% for Hindi, 92.95% for Kannada, 93.85% for Telugu, and 95.97% for Urdu, and the ICO-EC model also exceeds 93% accuracy on the Hindi dataset, outperforming the other models.
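The feature-extraction stage described in the abstract (IMFCC, chroma, ZCR, and spectral roll-off features) can be illustrated with a short Python sketch. This is not the authors' implementation: the paper's IMFCC variant is replaced here by standard MFCCs from librosa, and the file path, sampling rate, and frame-averaging are illustrative assumptions.

```python
# Minimal sketch of utterance-level feature extraction (assumed setup, not the paper's code).
import numpy as np
import librosa

def extract_features(wav_path: str, sr: int = 16000) -> np.ndarray:
    """Return a single utterance-level feature vector (mean over frames)."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)      # stand-in for the paper's IMFCC features
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)        # chroma-based features
    zcr = librosa.feature.zero_crossing_rate(y)             # zero-crossing rate
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)  # spectral roll-off
    # Average each feature over time and concatenate into one vector.
    return np.concatenate([f.mean(axis=1) for f in (mfcc, chroma, zcr, rolloff)])
```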

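The classification stage combines Deep Maxout, LSTM, and ICNN classifiers, with their contribution weights tuned by Improved Coot Optimization. The sketch below shows one common way such a late-fusion ensemble can be realized, as a weighted average of class probabilities; the three models, their probability outputs, and the example weights are placeholders, and the ICO optimizer itself is not reproduced here.

```python
# Hedged sketch of weighted late fusion over per-model class probabilities.
import numpy as np

def ensemble_predict(prob_list, weights):
    """Weighted average of per-model probability matrices (n_samples x n_classes)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # normalize so the weights sum to 1
    fused = sum(w * p for w, p in zip(weights, prob_list))
    return fused.argmax(axis=1)                  # predicted emotion class per sample

# Hypothetical usage: probabilities from three trained classifiers, with weights
# found by some optimizer (the paper uses ICO; any search over the simplex works).
# preds = ensemble_predict([p_maxout, p_lstm, p_icnn], weights=[0.4, 0.35, 0.25])
```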
Bibliographic details
Published in: Multimedia Tools and Applications, 2024-05, Vol. 83 (18), p. 54331-54365
Main authors: Kawade, Rupali Ramdas; Jagtap, Sonal K.
Format: Article
Language: English
DOI: 10.1007/s11042-023-17097-9
ISSN: 1380-7501; EISSN: 1573-7721
Publisher: Springer US, New York
Subjects: Accuracy; Classification; Computer Communication Networks; Computer Science; Data Structures and Information Theory; Emotion recognition; Feature extraction; Human–computer interface; Multimedia Information Systems; Optimization; Preprocessing; Special Purpose and Application-Based Systems; Speech recognition
Online access: Full text