Voice response method based on neural network, storage medium and terminal equipment

The invention discloses a voice response method based on a neural network, a storage medium and terminal equipment. The method comprises the steps of: determining a corresponding response voice when user voice input is received; inputting the response voice into a preset expression feature generation model to obtain an expression feature sequence corresponding to the response voice; determining a facial expression sequence corresponding to the response voice according to the expression feature sequence; and controlling a preset virtual image to play the response voice while synchronously playing the facial expression sequence.

Detailed description

Bibliographic details
Author: ZHAO ZHIBAO
Format: Patent
Language: chi ; eng
Subjects:
Online access: Order full text
creator ZHAO ZHIBAO
description The invention discloses a voice response method based on a neural network, a storage medium and terminal equipment. The method comprises the steps of: determining a corresponding response voice when user voice input is received; inputting the response voice into a preset expression feature generation model to obtain an expression feature sequence corresponding to the response voice; determining a facial expression sequence corresponding to the response voice according to the expression feature sequence; and controlling a preset virtual image to play the response voice while synchronously playing the facial expression sequence. According to the invention, the expression feature sequence corresponding to the response voice is determined through the preset expression feature generation model, and the facial expression shown while the virtual image plays the response voice is controlled according to that sequence, so that the res
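The description outlines a four-step pipeline: determine a response voice from the user's voice, generate an expression feature sequence from it, map the features to a facial expression sequence, and play voice and expressions in sync on a virtual avatar. As a minimal sketch only, with every function name, the response lookup, and the feature/expression mappings being hypothetical stand-ins (the patent's actual neural models are not public here), the flow could look like:

```python
from typing import List, Tuple

# Hypothetical sketch of the pipeline in the abstract; all names and mappings
# below are assumptions, not the patent's actual models.

def determine_response_voice(user_voice: str) -> str:
    """Step 1: pick a response utterance for the received user voice
    (stub lookup standing in for speech recognition + dialogue logic)."""
    responses = {"hello": "Hi, how can I help you?"}
    return responses.get(user_voice.lower(), "Sorry, could you repeat that?")

def expression_feature_model(response_voice: str) -> List[List[float]]:
    """Step 2: stand-in for the preset expression feature generation model.
    Emits one toy feature vector per word instead of per-frame acoustic features."""
    return [[min(len(word) / 10.0, 1.0), 0.5] for word in response_voice.split()]

def features_to_expressions(features: List[List[float]]) -> List[str]:
    """Step 3: map each feature vector to a facial expression label."""
    return ["open_mouth" if f[0] > 0.3 else "neutral" for f in features]

def respond(user_voice: str) -> Tuple[str, List[str]]:
    """Full pipeline: response voice -> feature sequence -> expression sequence.
    Step 4 (driving the avatar in sync with audio playback) is only noted,
    since it depends on the rendering engine."""
    voice = determine_response_voice(user_voice)
    features = expression_feature_model(voice)
    expressions = features_to_expressions(features)
    return voice, expressions
```

The key design point the abstract emphasizes is that the expression sequence is derived from the response voice itself, so the avatar's face stays aligned with the audio rather than being animated independently.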
format Patent
fulltext fulltext_linktorsrc
language chi ; eng
recordid cdi_epo_espacenet_CN111383642A
source esp@cenet
subjects ACOUSTICS
CALCULATING
COMPUTING
COUNTING
HANDLING RECORD CARRIERS
MUSICAL INSTRUMENTS
PHYSICS
PRESENTATION OF DATA
RECOGNITION OF DATA
RECORD CARRIERS
SPEECH ANALYSIS OR SYNTHESIS
SPEECH OR AUDIO CODING OR DECODING
SPEECH OR VOICE PROCESSING
SPEECH RECOGNITION
title Voice response method based on neural network, storage medium and terminal equipment
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-01T05%3A02%3A13IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=ZHAO%20ZHIBAO&rft.date=2020-07-07&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3ECN111383642A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true