EmoSphere-TTS: Emotional Style and Intensity Modeling via Spherical Emotion Vector for Controllable Emotional Text-to-Speech
Despite rapid advances in the field of emotional text-to-speech (TTS), recent studies primarily focus on mimicking the average style of a particular emotion. As a result, the ability to manipulate speech emotion remains constrained to several predefined labels, compromising the ability to reflect the nuanced variations of emotion. In this paper, we propose EmoSphere-TTS, which synthesizes expressive emotional speech by using a spherical emotion vector to control the emotional style and intensity of the synthetic speech. Without any human annotation, we use the arousal, valence, and dominance pseudo-labels to model the complex nature of emotion via a Cartesian-spherical transformation. Furthermore, we propose a dual conditional adversarial network to improve the quality of generated speech by reflecting the multi-aspect characteristics. The experimental results demonstrate the model's ability to control emotional style and intensity with high-quality expressive speech.
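The abstract describes mapping arousal-valence-dominance (AVD) pseudo-labels into spherical coordinates so that distance from a neutral point can act as emotion intensity and the angles as emotion style. The following is a minimal illustrative sketch of such a Cartesian-to-spherical transformation, not the authors' implementation; the neutral-center value and coordinate conventions are assumptions for illustration only.

```python
import numpy as np

def avd_to_spherical(avd, neutral_center=(0.5, 0.5, 0.5)):
    """Convert an AVD point to (radius, polar angle, azimuth) about an assumed neutral center.

    The radius (distance from the neutral center) can be read as emotion intensity,
    and the two angles as the emotional style direction. Conventions are illustrative.
    """
    x, y, z = np.asarray(avd, dtype=float) - np.asarray(neutral_center, dtype=float)
    r = np.sqrt(x**2 + y**2 + z**2)               # distance from neutral ~ intensity
    theta = np.arccos(z / r) if r > 0 else 0.0    # polar angle
    phi = np.arctan2(y, x)                        # azimuth; (theta, phi) ~ style
    return r, theta, phi

# Example: one pseudo-labeled utterance relative to the assumed neutral center.
print(avd_to_spherical([0.8, 0.2, 0.6]))
```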
Saved in:
Published in: | arXiv.org 2024-11 |
---|---|
Main authors: | Cho, Deok-Hyeon; Oh, Hyung-Seok; Kim, Seung-Bin; Lee, Sang-Hoon; Lee, Seong-Whan |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_title | arXiv.org
---|---|
creator | Cho, Deok-Hyeon; Oh, Hyung-Seok; Kim, Seung-Bin; Lee, Sang-Hoon; Lee, Seong-Whan
description | Despite rapid advances in the field of emotional text-to-speech (TTS), recent studies primarily focus on mimicking the average style of a particular emotion. As a result, the ability to manipulate speech emotion remains constrained to several predefined labels, compromising the ability to reflect the nuanced variations of emotion. In this paper, we propose EmoSphere-TTS, which synthesizes expressive emotional speech by using a spherical emotion vector to control the emotional style and intensity of the synthetic speech. Without any human annotation, we use the arousal, valence, and dominance pseudo-labels to model the complex nature of emotion via a Cartesian-spherical transformation. Furthermore, we propose a dual conditional adversarial network to improve the quality of generated speech by reflecting the multi-aspect characteristics. The experimental results demonstrate the model's ability to control emotional style and intensity with high-quality expressive speech. |
doi_str_mv | 10.48550/arxiv.2406.07803 |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-11 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2406_07803 |
source | arXiv.org; Free E-Journals |
subjects | Annotations; Arousal; Computer Science - Artificial Intelligence; Computer Science - Sound; Controllability; Emotions; Labels; Speech recognition |
title | EmoSphere-TTS: Emotional Style and Intensity Modeling via Spherical Emotion Vector for Controllable Emotional Text-to-Speech |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-06T11%3A27%3A05IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=EmoSphere-TTS:%20Emotional%20Style%20and%20Intensity%20Modeling%20via%20Spherical%20Emotion%20Vector%20for%20Controllable%20Emotional%20Text-to-Speech&rft.jtitle=arXiv.org&rft.au=Cho,%20Deok-Hyeon&rft.date=2024-11-04&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2406.07803&rft_dat=%3Cproquest_arxiv%3E3067555748%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3067555748&rft_id=info:pmid/&rfr_iscdi=true |