Feature Analysis and Evaluation for Automatic Emotion Identification in Speech


Bibliographic Details
Published in: IEEE Transactions on Multimedia, 2010-10, Vol. 12 (6), pp. 490-501
Authors: Luengo, I.; Navas, E.; Hernáez, Inmaculada
Format: Article
Language: English
Online access: Order full text
Description: The definition of parameters is a crucial step in the development of a system for identifying emotions in speech. Although there is no agreement on which are the best features for this task, it is generally accepted that prosody carries most of the emotional information. Most works in the field use some kind of prosodic features, often in combination with spectral and voice quality parametrizations. Nevertheless, no systematic study has been done comparing these features. This paper presents the analysis of the characteristics of features derived from prosody, spectral envelope, and voice quality as well as their capability to discriminate emotions. In addition, early fusion and late fusion techniques for combining different information sources are evaluated. The results of this analysis are validated with experimental automatic emotion identification tests. Results suggest that spectral envelope features outperform the prosodic ones. Even when different parametrizations are combined, the late fusion of long-term spectral statistics with short-term spectral envelope parameters provides an accuracy comparable to that obtained when all parametrizations are combined.
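The early fusion versus late fusion contrast mentioned in the abstract can be sketched as follows. This is an illustrative example only, not the paper's implementation: the function names, feature values, and per-emotion scores below are all hypothetical.

```python
def early_fusion(prosodic, spectral):
    """Early fusion: concatenate the parametrizations into one feature
    vector before a single classifier sees them."""
    return list(prosodic) + list(spectral)

def late_fusion(scores_a, scores_b, weight=0.5):
    """Late fusion: combine per-emotion scores from two separately
    trained classifiers with a weighted average."""
    return {emotion: weight * scores_a[emotion] + (1 - weight) * scores_b[emotion]
            for emotion in scores_a}

# Hypothetical per-emotion scores from a prosodic and a spectral classifier.
prosody_scores  = {"anger": 0.2, "joy": 0.5, "sadness": 0.3}
spectral_scores = {"anger": 0.7, "joy": 0.3, "sadness": 0.0}

fused = late_fusion(prosody_scores, spectral_scores)
decision = max(fused, key=fused.get)  # emotion with the highest fused score
print(decision)  # prints "anger": 0.45 beats joy's 0.40 and sadness's 0.15
```

The design difference matters: early fusion lets one classifier exploit correlations across feature types, while late fusion keeps each classifier specialized and combines only their decisions, which is the scheme the abstract reports as competitive with combining all parametrizations.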
DOI: 10.1109/TMM.2010.2051872
ISSN: 1520-9210
EISSN: 1941-0077
Source: IEEE Electronic Library (IEL)
Subjects:
Dispersion
Emotion identification
Emotions
Envelopes
Estimation
Feature extraction
information fusion
Labeling
Mel frequency cepstral coefficient
Multimedia
Parametrization
Spectra
Speech
Statistics
Voice