Embedded Design of an Emotion-Aware Music Player
In this paper, a novel human-robot interaction (HRI) design is proposed in which emotion recognition from the speech signal is used to create an emotion-aware music player that can be implemented on an embedded platform. The proposed system maps an input short-speech utterance to a two-dimensional...
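The abstract only sketches the selection mechanism: a recognized utterance is mapped to a point on the valence-arousal plane, a song with matching valence-arousal annotations is chosen, and a cheer-up sequence steps the music toward a neutral/happy state. The snippet below is a minimal, hypothetical illustration of that idea; the song database, the [-1, 1] value range, the Euclidean distance metric, the three-step trajectory, and all function and field names are assumptions for illustration, not the authors' implementation.

```python
import math

# Hypothetical song database: each song is annotated with a
# (valence, arousal) pair in [-1, 1]; the paper's actual annotations
# and scaling are not given in the record.
SONGS = [
    {"title": "Song A", "valence": 0.8, "arousal": 0.6},    # happy / energetic
    {"title": "Song B", "valence": 0.1, "arousal": 0.0},    # roughly neutral
    {"title": "Song C", "valence": -0.7, "arousal": -0.4},  # sad / calm
]

def nearest_song(valence, arousal, songs=SONGS):
    """Pick the song whose annotation is closest (Euclidean distance)
    to the emotional state recognized from the speech utterance."""
    return min(songs, key=lambda s: math.hypot(s["valence"] - valence,
                                               s["arousal"] - arousal))

def cheer_up_playlist(valence, arousal, steps=3, target=(0.6, 0.3), songs=SONGS):
    """Step from the user's current state toward a neutral/happy target,
    selecting the nearest song at each intermediate point."""
    playlist = []
    for i in range(1, steps + 1):
        v = valence + (target[0] - valence) * i / steps
        a = arousal + (target[1] - arousal) * i / steps
        playlist.append(nearest_song(v, a, songs))
    return playlist

if __name__ == "__main__":
    # Example: a low-valence, low-arousal (sad-sounding) utterance.
    for song in cheer_up_playlist(valence=-0.6, arousal=-0.3):
        print(song["title"])
```

In the paper itself the valence-arousal point comes from speech emotion recognition running on the embedded board; here it is simply passed in as two numbers.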
Saved in:
Main authors: | Cervantes, Carlos A.; Kai-Tai Song |
---|---|
Format: | Conference Proceeding |
Language: | eng |
Subjects: | Emotion recognition; emotional model; Feature extraction; human-robot interaction; Speech; Speech processing; Speech recognition; Vectors |
Online access: | Order full text |
container_end_page | 2533 |
---|---|
container_issue | |
container_start_page | 2528 |
container_title | 2013 IEEE International Conference on Systems, Man, and Cybernetics |
container_volume | |
creator | Cervantes, Carlos A.; Kai-Tai Song
description | In this paper, a novel human-robot interaction (HRI) design is proposed in which emotion recognition from the speech signal is used to create an emotion-aware music player that can be implemented on an embedded platform. The proposed system maps an input short-speech utterance to a two-dimensional emotional plane of valence and arousal. This strategy allows the system to automatically select a piece of music from a database of songs whose emotions are also expressed as arousal and valence values. Furthermore, a cheer-up strategy is proposed in which songs with varying emotional content are played in order to cheer the user up to a more neutral/happy state. The proposed system has been implemented on a BeagleBoard. An online test verified the feasibility of the system. A questionnaire survey shows that 80% of subjects agree with the songs selected by the proposed cheer-up strategy based on the emotional model. |
doi_str_mv | 10.1109/SMC.2013.431 |
format | Conference Proceeding |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1062-922X |
ispartof | 2013 IEEE International Conference on Systems, Man, and Cybernetics, 2013, p.2528-2533 |
issn | 1062-922X; 2577-1655
language | eng |
recordid | cdi_ieee_primary_6722184 |
source | IEEE Electronic Library (IEL) Conference Proceedings |
subjects | Emotion recognition; emotional model; Feature extraction; human-robot interaction; Speech; Speech processing; Speech recognition; Vectors
title | Embedded Design of an Emotion-Aware Music Player |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-12T20%3A20%3A57IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_6IE&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Embedded%20Design%20of%20an%20Emotion-Aware%20Music%20Player&rft.btitle=2013%20IEEE%20International%20Conference%20on%20Systems,%20Man,%20and%20Cybernetics&rft.au=Cervantes,%20Carlos%20A.&rft.date=2013-10&rft.spage=2528&rft.epage=2533&rft.pages=2528-2533&rft.issn=1062-922X&rft.eissn=2577-1655&rft.coden=IEEPAD&rft_id=info:doi/10.1109/SMC.2013.431&rft_dat=%3Cieee_6IE%3E6722184%3C/ieee_6IE%3E%3Curl%3E%3C/url%3E&rft.eisbn=1479906522&rft.eisbn_list=9781479906529&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=6722184&rfr_iscdi=true |