EMOTIONAL SPEECH GENERATING METHOD AND APPARATUS FOR CONTROLLING EMOTIONAL INTENSITY

An emotional speech generating method and apparatus capable of adjusting emotional intensity is disclosed. The emotional speech generating method includes: generating emotion groups by grouping weight vectors representing the same emotion into the same emotion group; determining an internal distance between weight vectors included in the same emotion group; determining an external distance between weight vectors included in one emotion group and weight vectors included in another emotion group; determining a representative weight vector of each emotion group based on the internal distance and the external distance; generating a style embedding by applying the representative weight vector of each emotion group to a style token including prosodic information for expressing an emotion; and generating an emotional speech expressing the emotion using the style embedding.
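
The abstract describes the pipeline only in prose. The Python sketch below is a non-authoritative illustration of those steps under stated assumptions: the Euclidean distance metric, the external-to-internal distance ratio used to pick each representative vector, and the intensity scaling of the weights are assumptions not found in the abstract, and every name in the code (group_by_emotion, style_tokens, and so on) is hypothetical rather than taken from the patent.

import numpy as np
from collections import defaultdict

def group_by_emotion(weight_vectors, emotion_labels):
    # Step 1: group weight vectors that represent the same emotion.
    groups = defaultdict(list)
    for w, label in zip(weight_vectors, emotion_labels):
        groups[label].append(np.asarray(w, dtype=float))
    return {label: np.stack(ws) for label, ws in groups.items()}

def internal_distance(vector, own_group):
    # Step 2: mean Euclidean distance from a vector to the members of its own group.
    return float(np.mean([np.linalg.norm(vector - v) for v in own_group]))

def external_distance(vector, other_groups):
    # Step 3: mean Euclidean distance from a vector to the members of all other groups.
    others = np.concatenate(list(other_groups))
    return float(np.mean([np.linalg.norm(vector - v) for v in others]))

def representative_vectors(groups):
    # Step 4: pick one representative weight vector per emotion group.
    # Assumption: take the member with the largest external-to-internal distance
    # ratio; the abstract only says the choice is "based on" the two distances.
    reps = {}
    for label, group in groups.items():
        others = [g for lbl, g in groups.items() if lbl != label]
        if not others:
            # Single emotion group: fall back to the member closest to its own group.
            scores = [-internal_distance(w, group) for w in group]
        else:
            scores = [external_distance(w, others) / (internal_distance(w, group) + 1e-8)
                      for w in group]
        reps[label] = group[int(np.argmax(scores))]
    return reps

def style_embedding(rep_weights, style_tokens, intensity=1.0):
    # Step 5: weighted sum of style tokens (prosodic information).
    # Scaling the weights by `intensity` is an assumed mechanism for adjusting
    # emotional intensity; the abstract does not state how this is done.
    return (intensity * np.asarray(rep_weights)) @ np.asarray(style_tokens)

# Toy usage: 10 weight vectors over 4 style tokens, 3 emotions, 256-dim tokens.
rng = np.random.default_rng(0)
weights = rng.random((10, 4))
labels = ["happy", "sad", "angry", "happy", "sad",
          "angry", "happy", "sad", "angry", "happy"]
reps = representative_vectors(group_by_emotion(weights, labels))
tokens = rng.random((4, 256))
embedding = style_embedding(reps["happy"], tokens, intensity=1.5)

In a global-style-token style synthesizer, the resulting style embedding would condition the acoustic model that renders the speech; that synthesis step is outside the scope of this sketch.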

Detailed Description

Bibliographic Details
Main Authors: OH, Sangshin; LEE, Tae Jin; JANG, Inseon; KANG, Hong-Goo; AHN, Chung Hyun; UM, Se-Yun
Format: Patent
Language: eng
Keywords:
Online Access: Order full text
creator OH, Sangshin
LEE, Tae Jin
JANG, Inseon
KANG, Hong-Goo
AHN, Chung Hyun
UM, Se-Yun
description An emotional speech generating method and apparatus capable of adjusting emotional intensity is disclosed. The emotional speech generating method includes: generating emotion groups by grouping weight vectors representing the same emotion into the same emotion group; determining an internal distance between weight vectors included in the same emotion group; determining an external distance between weight vectors included in one emotion group and weight vectors included in another emotion group; determining a representative weight vector of each emotion group based on the internal distance and the external distance; generating a style embedding by applying the representative weight vector of each emotion group to a style token including prosodic information for expressing an emotion; and generating an emotional speech expressing the emotion using the style embedding.
format Patent
fulltext fulltext_linktorsrc
language eng
recordid cdi_epo_espacenet_US2021090551A1
source esp@cenet
subjects ACOUSTICS
MUSICAL INSTRUMENTS
PHYSICS
SPEECH ANALYSIS OR SYNTHESIS
SPEECH OR AUDIO CODING OR DECODING
SPEECH OR VOICE PROCESSING
SPEECH RECOGNITION
title EMOTIONAL SPEECH GENERATING METHOD AND APPARATUS FOR CONTROLLING EMOTIONAL INTENSITY
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-20T03%3A49%3A43IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=OH,%20Sangshin&rft.date=2021-03-25&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EUS2021090551A1%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true