A two-channel speech emotion recognition model based on raw stacked waveform
Saved in:
Published in: | Multimedia tools and applications 2022-03, Vol.81 (8), p.11537-11562 |
---|---|
Main authors: | Zheng, Chunjun; Wang, Chunli; Jia, Ning |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 11562 |
---|---|
container_issue | 8 |
container_start_page | 11537 |
container_title | Multimedia tools and applications |
container_volume | 81 |
creator | Zheng, Chunjun; Wang, Chunli; Jia, Ning |
description | To improve the accuracy and efficiency of speech emotion recognition (SER), an acoustic feature set and an SER model were designed based on the original speech signal, and the nonlinear relationships among the acoustic features, the SER model, and the recognition task were explored. Moreover, the original features of the speech signal were studied rather than traditional statistical features. A joint two-channel model was proposed based on the raw stacked waveform. To model the raw waveform features, a convolutional recurrent neural network (CRNN) and a bidirectional long short-term memory (BiLSTM) network were introduced. An attention mechanism was integrated into the model so that each single channel could learn the expression of salient local regions and global emotion features. Through these channels, the multi-scale perception of speech acoustic features is improved, and the internal correlation between salient regions and the convolutional neural network is explored. The time-domain and frequency-domain features of speech are made prominent, and the local expression of emotion is emphasized. Based on a preprocessing strategy of background separation and dimension unification, the CRNN is used to extract global information. The proposed joint model effectively integrates the advantages of the two channels. Several comparative experiments were conducted on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) database. The results showed that, compared with popular models, the proposed two-channel SER model improved unweighted accuracy (UA) by 5.1% and shortened the convergence period by 58%. Furthermore, it performed best in handling data skew and improving efficiency, which demonstrates the value of features and models based on the raw waveform. (An illustrative sketch of the two-channel architecture follows the record fields below.) |
doi_str_mv | 10.1007/s11042-022-12378-1 |
format | Article |
publisher | Springer US, New York |
fulltext | fulltext |
identifier | ISSN: 1380-7501 |
ispartof | Multimedia tools and applications, 2022-03, Vol.81 (8), p.11537-11562 |
issn | 1380-7501 1573-7721 |
language | eng |
recordid | cdi_proquest_journals_2644599912 |
source | SpringerLink Journals |
subjects | Acoustics; Artificial neural networks; Channels; Computer Communication Networks; Computer Science; Data Structures and Information Theory; Emotion recognition; Emotions; Motion capture; Motion perception; Multimedia Information Systems; Neural networks; Recurrent neural networks; Special Purpose and Application-Based Systems; Speech; Speech recognition; Waveforms |
title | A two-channel speech emotion recognition model based on raw stacked waveform |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-08T03%3A54%3A45IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=A%20two-channel%20speech%20emotion%20recognition%20model%20based%20on%20raw%20stacked%20waveform&rft.jtitle=Multimedia%20tools%20and%20applications&rft.au=Zheng,%20Chunjun&rft.date=2022-03-01&rft.volume=81&rft.issue=8&rft.spage=11537&rft.epage=11562&rft.pages=11537-11562&rft.issn=1380-7501&rft.eissn=1573-7721&rft_id=info:doi/10.1007/s11042-022-12378-1&rft_dat=%3Cproquest_cross%3E2644599912%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2644599912&rft_id=info:pmid/&rfr_iscdi=true |
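The abstract above describes the joint architecture only at a high level. The following is a minimal PyTorch sketch of a two-channel SER model of this kind: one CRNN channel over the raw waveform and one BiLSTM channel over the waveform stacked into fixed-length frames, each pooled by attention and fused for classification. All layer sizes, the frame length, and the four-class output (matching the common IEMOCAP setup) are illustrative assumptions, not the authors' published configuration.

```python
# Hypothetical sketch of a two-channel SER model on the raw stacked waveform.
# Layer sizes, frame length, and fusion are illustrative assumptions.
import torch
import torch.nn as nn


class Attention(nn.Module):
    """Additive attention that pools a sequence of frame features into one vector."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, time, dim)
        weights = torch.softmax(self.score(x), dim=1)      # (batch, time, 1)
        return (weights * x).sum(dim=1)                     # (batch, dim)


class TwoChannelSER(nn.Module):
    """Channel 1: CRNN over the raw waveform (salient local regions).
    Channel 2: BiLSTM over the waveform stacked into fixed-length frames (global context).
    Attention pools each channel; the pooled vectors are fused and classified."""

    def __init__(self, frame_len: int = 400, hidden: int = 128, n_classes: int = 4):
        super().__init__()
        # CRNN channel: 1-D convolutions on the waveform, then a recurrent layer over frames.
        self.conv = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=80, stride=4), nn.BatchNorm1d(64), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=3, padding=1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.AdaptiveAvgPool1d(100),                      # fixed number of frames
        )
        self.gru = nn.GRU(128, hidden, batch_first=True, bidirectional=True)
        self.attn_crnn = Attention(2 * hidden)
        # BiLSTM channel: operates on the raw waveform stacked into frames of frame_len samples.
        self.frame_len = frame_len
        self.bilstm = nn.LSTM(frame_len, hidden, batch_first=True, bidirectional=True)
        self.attn_lstm = Attention(2 * hidden)
        # Joint classifier over the fused channel representations.
        self.classifier = nn.Sequential(
            nn.Linear(4 * hidden, hidden), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, wav: torch.Tensor) -> torch.Tensor:  # wav: (batch, samples)
        # Channel 1: convolutional features + GRU + attention pooling.
        c = self.conv(wav.unsqueeze(1)).transpose(1, 2)     # (batch, 100, 128)
        c, _ = self.gru(c)
        c = self.attn_crnn(c)
        # Channel 2: stack the raw waveform into frames, then BiLSTM + attention pooling.
        n_frames = wav.size(1) // self.frame_len
        frames = wav[:, : n_frames * self.frame_len].reshape(
            wav.size(0), n_frames, self.frame_len)
        s, _ = self.bilstm(frames)
        s = self.attn_lstm(s)
        # Fuse both channels and classify the emotion.
        return self.classifier(torch.cat([c, s], dim=1))


if __name__ == "__main__":
    model = TwoChannelSER()
    logits = model(torch.randn(2, 48000))                   # two 3-second clips at 16 kHz
    print(logits.shape)                                      # torch.Size([2, 4])
```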