Prosodic and Segmental Rubrics in Emotion Identification

It is well known that the emotional state of a speaker usually alters the way he or she speaks. Although all the components of the voice can be affected by emotion in a statistically significant way, not all of these deviations from a neutral voice are identified by human listeners as conveying emotional information. In this paper we carry out several perceptual and objective experiments that show the relevance of prosody and the segmental spectrum in the characterization and identification of four emotions in Spanish. A Bayes classifier is used in the objective emotion identification task; emotion models are generated as the contribution of every emotion to the build-up of a universal background emotion codebook. According to our experiments, surprise is primarily identified by humans through its prosodic rubric (in spite of some automatically identifiable segmental characteristics), whereas for anger the situation is just the opposite. Sadness and happiness need a combination of prosodic and segmental rubrics to be reliably identified.
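
The objective identification experiments mentioned above rest on a codebook-based Bayes classifier. The following Python fragment is a minimal sketch of that general idea, not the authors' implementation: the class name, the K-means codebook, its size, the Laplace smoothing and the uniform class priors are all assumptions, and frame-level feature extraction (prosodic or segmental) is assumed to happen elsewhere. It builds one universal background codebook from all emotions, models each emotion by its contribution to the codewords, and scores a test utterance with a naive Bayes rule.

import numpy as np
from sklearn.cluster import KMeans

class CodebookEmotionClassifier:
    def __init__(self, n_codewords=64, smoothing=1.0):
        self.n_codewords = n_codewords
        self.smoothing = smoothing  # Laplace smoothing for codewords an emotion never uses

    def fit(self, frames_per_utterance, labels):
        # frames_per_utterance: list of (n_frames_i, n_dims) feature arrays, one per utterance
        # labels: one emotion label per utterance
        all_frames = np.vstack(frames_per_utterance)
        # universal background codebook: a single K-means model shared by all emotions
        self.codebook_ = KMeans(n_clusters=self.n_codewords, n_init=10,
                                random_state=0).fit(all_frames)
        self.classes_ = sorted(set(labels))
        counts = {c: np.zeros(self.n_codewords) for c in self.classes_}
        for frames, label in zip(frames_per_utterance, labels):
            codes = self.codebook_.predict(frames)
            counts[label] += np.bincount(codes, minlength=self.n_codewords)
        # each emotion model is its smoothed contribution to the shared codebook
        self.log_probs_ = {
            c: np.log((counts[c] + self.smoothing) /
                      (counts[c].sum() + self.smoothing * self.n_codewords))
            for c in self.classes_
        }
        return self

    def predict(self, frames):
        # histogram of codeword assignments for the test utterance
        hist = np.bincount(self.codebook_.predict(frames),
                           minlength=self.n_codewords)
        # naive Bayes decision with uniform class priors
        scores = {c: float(hist @ self.log_probs_[c]) for c in self.classes_}
        return max(scores, key=scores.get)

Under these assumptions, a test utterance is classified with clf.fit(train_frames, train_labels) followed by clf.predict(test_frames); the features and model details actually used in the paper may differ.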

Bibliographic Details
Main Authors: Barra, R.; Montero, J.M.; Macias-Guarasa, J.; D'Haro, L.F.; San-Segundo, R.; Cordoba, R.
Format: Conference Proceeding
Language: English
Subjects: Appraisal; Emotion recognition; Feature extraction; Humans; Prototypes; Spatial databases; Speech analysis; Speech recognition; Speech synthesis; Telecommunications
DOI: 10.1109/ICASSP.2006.1660213
Published in: 2006 IEEE International Conference on Acoustics, Speech and Signal Processing Proceedings, 2006, Vol. 1, p. I-I
ISSN: 1520-6149
EISSN: 2379-190X
ISBN: 9781424404698; 142440469X
Publisher: IEEE
Source: IEEE Electronic Library (IEL) Conference Proceedings