Method for detecting emotions from speech using speaker identification

To reduce the error rate when classifying emotions from an acoustical speech input (SI) only, it is suggested to include a process of speaker identification to obtain certain speaker identification data (SID) on the basis of which the process of recognizing an emotional state is adapted and/or configured. In particular, speaker-specific feature extractors (FE) and/or emotion classifiers (EC) are selected based on said speaker identification data (SID).
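
The abstract outlines a two-stage pipeline: a speaker-identification front end produces speaker identification data (SID), which is then used to select a speaker-specific feature extractor (FE) and emotion classifier (EC) for the actual emotion recognition. The following Python sketch illustrates that selection logic under stated assumptions; all names (SpeakerProfile, identify_speaker, the fallback profile) are illustrative and not taken from the patent.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Sequence

# Illustrative component types: an FE maps raw samples to features,
# an EC maps features to an emotion label.
FeatureExtractor = Callable[[Sequence[float]], Sequence[float]]
EmotionClassifier = Callable[[Sequence[float]], str]

@dataclass
class SpeakerProfile:
    # Speaker-specific components, selected via speaker identification data (SID).
    feature_extractor: FeatureExtractor
    emotion_classifier: EmotionClassifier

def identify_speaker(speech_input: Sequence[float]) -> str:
    # Placeholder for a real speaker-ID model; returns an SID key.
    return "speaker_42"  # hypothetical SID for illustration

def recognize_emotion(speech_input: Sequence[float],
                      profiles: Dict[str, SpeakerProfile],
                      default: SpeakerProfile) -> str:
    # 1. Obtain speaker identification data (SID) from the speech input (SI).
    sid = identify_speaker(speech_input)
    # 2. Adapt/configure the recognizer: pick the speaker-specific FE and EC,
    #    falling back to speaker-independent components for unknown speakers.
    profile = profiles.get(sid, default)
    # 3. Extract features and classify the emotional state.
    features = profile.feature_extractor(speech_input)
    return profile.emotion_classifier(features)

# Toy usage: a mean-energy feature and a threshold "classifier".
default_profile = SpeakerProfile(
    feature_extractor=lambda x: [sum(v * v for v in x) / max(len(x), 1)],
    emotion_classifier=lambda f: "aroused" if f[0] > 0.5 else "calm",
)
print(recognize_emotion([0.9, -0.8, 0.7], {}, default_profile))  # -> "aroused"
```

The fallback to a speaker-independent default profile is an added assumption for unknown speakers; the patent abstract itself only states that speaker-specific FE and/or EC are selected based on the SID.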


Bibliographic details
Main authors: TATO RAQUEL, KEMP THOMAS, KOMPE RALF
Format: Patent
Language: eng
Subjects: ACOUSTICS; MUSICAL INSTRUMENTS; PHYSICS; SPEECH ANALYSIS OR SYNTHESIS; SPEECH OR AUDIO CODING OR DECODING; SPEECH OR VOICE PROCESSING; SPEECH RECOGNITION
Patent number: US7373301B2
Publication date: 2008-05-13
Source: esp@cenet
Online access: https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20080513&DB=EPODOC&CC=US&NR=7373301B2