Annotation and detection of blended emotions in real human-human dialogs recorded in a call center

Bibliographic Details
Authors: Vidrascu, L., Devillers, L.
Format: Conference Proceeding
Language: English
Online Access: Order full text
Description: In the context of call centers, emotion detection is potentially important for customer care. Emotions in natural interaction are often blended. For example, in a Stock Exchange service center, some customers are angry because they are afraid of losing money. A 100-dialog agent-client corpus was annotated at the speaker-turn level with one label among five emotions, including fear and anger. In this paper, we report on our experiments in automatic emotion detection using acoustic cues with several classifiers. 73% correct detection was achieved in discriminating between negative and neutral emotions, and 60% between anger and fear. An analysis of the confusions led us to question the validity of the initial single-valued annotation scheme: customer emotional states can be a mixture of anger and fear. As a result, a new annotation scheme allowing the selection of two verbal labels per segment was adopted.
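The two-class detection task described in the abstract (negative vs. neutral from acoustic cues) can be sketched in miniature. The sketch below uses a nearest-centroid rule over two invented placeholder features (mean pitch and energy) with toy data; it is not the paper's actual feature set, corpus, or classifiers.

```python
# Minimal sketch of two-class emotion detection from acoustic cues.
# The features (pitch mean, energy) and the toy data are illustrative
# placeholders, not the paper's actual features or corpus.
import math

def centroid(samples):
    """Mean feature vector of a list of (pitch_mean, energy) pairs."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(2))

def classify(x, centroids):
    """Assign x to the label of the nearest class centroid (Euclidean)."""
    return min(centroids, key=lambda lab: math.dist(x, centroids[lab]))

# Toy training data: negative turns here have higher pitch and energy.
train = {
    "negative": [(220.0, 0.80), (240.0, 0.90), (230.0, 0.85)],
    "neutral":  [(120.0, 0.30), (110.0, 0.25), (130.0, 0.35)],
}
centroids = {lab: centroid(xs) for lab, xs in train.items()}

print(classify((235.0, 0.88), centroids))  # a high-pitch, high-energy turn
print(classify((115.0, 0.28), centroids))  # a low-pitch, low-energy turn
```

A real system in this setting would extract many more acoustic features per speaker turn and train stronger classifiers, but the decision structure, mapping a per-turn feature vector to one of a small set of emotion labels, is the same.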
DOI: 10.1109/ICME.2005.1521580
ISSN: 1945-7871
EISSN: 1945-788X
ISBN: 9780780393318; 0780393317
Published in: 2005 IEEE International Conference on Multimedia and Expo, 2005, 4 pp.
Source: IEEE Electronic Library (IEL) Conference Proceedings
Subjects: Acoustic signal detection
Appraisal
Context-aware services
Emotion recognition
Loudspeakers
Network synthesis
Social factors
Speech recognition
Speech synthesis
Stock markets