Virtual character expression driving method and system

The embodiment of the invention discloses a virtual character expression driving method and system. The method comprises the following steps: acquiring voice information of a user; analyzing the voice information to obtain character (text) information of the voice information; performing volume analysis on the volume information in the voice information to obtain a corresponding mouth shape expression instruction; combining the character information of the voice information with the corresponding mouth shape expression instruction for semantic calculation to obtain a response; and converting the obtained response into response voice, and converting the response voice into expression and mouth shape animation data that drive the virtual character to make the corresponding expression and mouth shape. Generation of expression animations is simplified, and the method can be widely applied in scenarios such as smart speakers, intelligent robots and chat robots, making these products more personified.
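
As a rough sketch of the pipeline described in the abstract, the Python example below wires the claimed steps together: speech recognition for the text information, per-frame volume analysis for the mouth-shape instruction, a semantic step that combines both into a response, and a final conversion of the response voice into animation data. Every function name, the openness scale and the frame size are hypothetical stand-ins rather than anything specified in the patent; a real system would substitute actual speech recognition, dialogue, TTS and animation components.

```python
# Hypothetical sketch of the claimed pipeline; every function body is a
# stand-in, and no real ASR/TTS/animation library is assumed.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class MouthShapeInstruction:
    """Mouth-shape expression instruction derived from volume analysis."""
    openness: float  # 0.0 (closed) .. 1.0 (wide open); hypothetical scale


def recognize_text(voice: List[float]) -> str:
    """Stand-in for speech recognition: voice information -> character (text) information."""
    return "hello"  # placeholder transcript


def analyze_volume(voice: List[float], frame: int = 160) -> List[MouthShapeInstruction]:
    """Map per-frame volume (mean absolute amplitude) to a mouth-openness value."""
    out = []
    for i in range(0, len(voice), frame):
        window = voice[i:i + frame]
        level = sum(abs(s) for s in window) / max(len(window), 1)
        out.append(MouthShapeInstruction(openness=min(1.0, 4.0 * level)))
    return out


def semantic_response(text: str, mouth: List[MouthShapeInstruction]) -> str:
    """Stand-in for the semantic calculation combining text and mouth-shape cues."""
    emphatic = bool(mouth) and max(m.openness for m in mouth) > 0.8
    return f"You said: {text}" + ("!" if emphatic else ".")


def synthesize_and_animate(response: str) -> Tuple[List[float], List[MouthShapeInstruction]]:
    """Stand-in for TTS plus conversion of the response voice into animation data."""
    response_voice = [0.1] * (len(response) * 160)   # fake synthesized waveform
    return response_voice, analyze_volume(response_voice)


if __name__ == "__main__":
    user_voice = [0.2, -0.3, 0.25] * 200             # fake captured audio
    text = recognize_text(user_voice)
    mouth = analyze_volume(user_voice)
    reply = semantic_response(text, mouth)
    _, animation = synthesize_and_animate(reply)
    print(reply, f"({len(animation)} animation frames)")
```

Mapping mean absolute amplitude per frame to a 0-1 openness value is only one plausible reading of "volume analysis"; the abstract does not specify the actual mapping.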

Bibliographic details
Main authors: YIN CHUAN, LIANG SHUAIDONG, YU GUOJUN, YU QIANG
Format: Patent
Language: chi ; eng
Record ID: cdi_epo_espacenet_CN113506360A
Source: esp@cenet
Subjects: ACOUSTICS
CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
MUSICAL INSTRUMENTS
PHYSICS
SPEECH ANALYSIS OR SYNTHESIS
SPEECH OR AUDIO CODING OR DECODING
SPEECH OR VOICE PROCESSING
SPEECH RECOGNITION
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-09T21%3A00%3A35IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=YIN%20CHUAN&rft.date=2021-10-15&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3ECN113506360A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true