Employing Fujisaki’s Intonation Model Parameters for Emotion Recognition

In this paper we introduce features extracted from Fujisaki's parameterization of the pitch contour for the task of emotion recognition from speech. To evaluate the proposed features, we trained a decision tree inducer as well as an instance-based learning algorithm. The datasets used for training the classification models were extracted from two emotional speech databases. Fujisaki's parameters benefited all prediction models, with an average increase of 9.52% in total accuracy.
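The record does not spell out Fujisaki's model itself. In its standard formulation, the log-F0 contour is the sum of a base level ln Fb, phrase components (impulse responses of a critically damped second-order system), and accent components (clipped step responses). A minimal numerical sketch follows; the parameter values (alpha, beta, gamma) and the command timings are typical textbook defaults, not values taken from this paper:

```python
import numpy as np

def phrase_comp(t, alpha=3.0):
    """Phrase component Gp(t) = alpha^2 * t * exp(-alpha*t) for t >= 0, else 0."""
    tc = np.clip(t, 0, None)
    return np.where(t >= 0, alpha**2 * tc * np.exp(-alpha * tc), 0.0)

def accent_comp(t, beta=20.0, gamma=0.9):
    """Accent component Ga(t) = min(1 - (1 + beta*t) * exp(-beta*t), gamma) for t >= 0."""
    tc = np.clip(t, 0, None)
    g = 1.0 - (1.0 + beta * tc) * np.exp(-beta * tc)
    return np.where(t >= 0, np.minimum(g, gamma), 0.0)

def fujisaki_f0(t, fb, phrases, accents, alpha=3.0, beta=20.0, gamma=0.9):
    """ln F0(t) = ln Fb + sum_i Ap_i * Gp(t - T0_i)
                        + sum_j Aa_j * (Ga(t - T1_j) - Ga(t - T2_j))."""
    lnf0 = np.full_like(t, np.log(fb))
    for ap, t0 in phrases:                      # (amplitude, onset time)
        lnf0 += ap * phrase_comp(t - t0, alpha)
    for aa, t1, t2 in accents:                  # (amplitude, onset, offset)
        lnf0 += aa * (accent_comp(t - t1, beta, gamma)
                      - accent_comp(t - t2, beta, gamma))
    return np.exp(lnf0)

# One phrase command at t=0 and one accent command spanning 0.3-0.8 s,
# over a 2-second utterance with an 80 Hz base frequency (illustrative values).
t = np.linspace(0.0, 2.0, 200)
f0 = fujisaki_f0(t, fb=80.0, phrases=[(0.5, 0.0)], accents=[(0.4, 0.3, 0.8)])
```

In analysis-by-synthesis, the command amplitudes and timings recovered by fitting this model to a measured F0 contour are what serve as the emotion-recognition features.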

Detailed description

Saved in:
Bibliographic details
Main authors: Zervas, Panagiotis, Mporas, Iosif, Fakotakis, Nikos, Kokkinakis, George
Format: Conference proceeding
Language: English
Subjects:
Online access: Full text
container_start_page 443
container_end_page 453
creator Zervas, Panagiotis
Mporas, Iosif
Fakotakis, Nikos
Kokkinakis, George
description In this paper we introduce features extracted from Fujisaki's parameterization of the pitch contour for the task of emotion recognition from speech. To evaluate the proposed features, we trained a decision tree inducer as well as an instance-based learning algorithm. The datasets used for training the classification models were extracted from two emotional speech databases. Fujisaki's parameters benefited all prediction models, with an average increase of 9.52% in total accuracy.
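The record names the classifiers only generically (a decision tree inducer and an instance-based learner); the specific algorithms and feature sets are not given here. As a purely illustrative sketch of the instance-based side, the following labels a query utterance by majority vote among its nearest training instances; the feature values and emotion labels are hypothetical stand-ins, not data from the paper:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Instance-based classification: label a query by majority vote
    among its k nearest training instances (Euclidean distance)."""
    nearest = sorted(train, key=lambda ex: math.dist(ex[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy feature vectors standing in for Fujisaki-derived features
# (e.g. base frequency, accent-command amplitude): entirely illustrative.
train = [
    ((80.0, 0.20), "neutral"),
    ((82.0, 0.25), "neutral"),
    ((110.0, 0.60), "anger"),
    ((115.0, 0.70), "anger"),
]
print(knn_predict(train, (112.0, 0.65), k=3))  # → anger
```

A decision tree inducer would be trained on the same feature vectors; the paper's reported gain is the average accuracy improvement when Fujisaki parameters are added to the feature set.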
doi_str_mv 10.1007/11752912_44
format Conference Proceeding
contributor Antoniou, Grigoris ; Plexousakis, Dimitris ; Potamias, George ; Spyropoulos, Costas
identifier ISBN: 9783540341178 ; 354034117X
identifier EISBN: 3540341188 ; 9783540341185
publisher Berlin, Heidelberg: Springer Berlin Heidelberg
rights Springer-Verlag Berlin Heidelberg 2006
fulltext fulltext
identifier ISSN: 0302-9743
ispartof Advances in Artificial Intelligence, 2006, p.443-453
issn 0302-9743
1611-3349
language eng
recordid cdi_pascalfrancis_primary_19152094
source Springer Books
subjects Applied sciences
Artificial intelligence
Computer science; control theory; systems
Emotion Category
Emotion Recognition
Emotional Speech
Exact sciences and technology
Pitch Contour
Total Accuracy
title Employing Fujisaki’s Intonation Model Parameters for Emotion Recognition
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-16T12%3A34%3A31IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-pascalfrancis_sprin&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Employing%20Fujisaki%E2%80%99s%20Intonation%20Model%20Parameters%20for%20Emotion%20Recognition&rft.btitle=Advances%20in%20Artificial%20Intelligence&rft.au=Zervas,%20Panagiotis&rft.date=2006&rft.spage=443&rft.epage=453&rft.pages=443-453&rft.issn=0302-9743&rft.eissn=1611-3349&rft.isbn=9783540341178&rft.isbn_list=354034117X&rft_id=info:doi/10.1007/11752912_44&rft_dat=%3Cpascalfrancis_sprin%3E19152094%3C/pascalfrancis_sprin%3E%3Curl%3E%3C/url%3E&rft.eisbn=3540341188&rft.eisbn_list=9783540341185&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true