Generating response in conversation

The present disclosure provides a method and apparatus for generating a response in a human-machine conversation. A first sound input may be received in the conversation. A first audio attribute may be extracted from the first sound input, wherein the first audio attribute indicates a first condition...

Detailed description

Saved in:
Bibliographic details
Main authors: Luan, Jian, Na, Xingyu, Xu, Xiang, Xiao, Zhe, Ju, Jianzhong, Xiu, Chi
Format: Patent
Language: eng
Subjects:
Online access: order full text
creator Luan, Jian
Na, Xingyu
Xu, Xiang
Xiao, Zhe
Ju, Jianzhong
Xiu, Chi
description The present disclosure provides a method and apparatus for generating a response in a human-machine conversation. A first sound input may be received in the conversation. A first audio attribute may be extracted from the first sound input, wherein the first audio attribute indicates a first condition of a user. A second sound input may be received in the conversation. A second audio attribute may be extracted from the second sound input, wherein the second audio attribute indicates a second condition of the user. A difference between the second audio attribute and the first audio attribute is determined, wherein the difference indicates a condition change of the user from the first condition to the second condition. A response to the second sound input is generated based at least on the condition change.
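The abstract above outlines a concrete pipeline: extract an audio attribute from each of two sound inputs, take their difference as a condition change, and condition the response on that change. The following minimal Python sketch illustrates one way such a pipeline could look; the function names, the use of RMS energy as the audio attribute, and the threshold-based response rules are illustrative assumptions, not the patented implementation.

    # Hypothetical sketch of the pipeline described in the abstract.
    # RMS energy as the "audio attribute" and the fixed response strings
    # are assumptions for illustration only.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class AudioAttribute:
        energy: float  # RMS energy as a rough proxy for the user's condition

    def extract_audio_attribute(samples: List[float]) -> AudioAttribute:
        """Extract a simple audio attribute (RMS energy) from raw samples."""
        rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
        return AudioAttribute(energy=rms)

    def condition_change(first: AudioAttribute, second: AudioAttribute) -> float:
        """Difference between the second and first attribute; its sign and
        magnitude stand in for the user's condition change."""
        return second.energy - first.energy

    def generate_response(change: float, threshold: float = 0.1) -> str:
        """Generate a response based at least on the condition change (toy rules)."""
        if change > threshold:
            return "You sound more energetic than before."
        if change < -threshold:
            return "You sound a bit tired. Would you like to take a break?"
        return "Okay, let's continue."

    # Usage: two sound inputs received in the conversation (dummy waveforms here).
    first_input = [0.2, -0.2, 0.25, -0.25]
    second_input = [0.05, -0.05, 0.04, -0.06]

    a1 = extract_audio_attribute(first_input)
    a2 = extract_audio_attribute(second_input)
    print(generate_response(condition_change(a1, a2)))

In practice the attribute could plausibly be pitch, speaking rate, or an embedding from a speech model, and the response generator would be a full dialogue system rather than fixed strings; the claims cover the attribute-difference structure, not any particular feature.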
format Patent
fulltext fulltext_linktorsrc
language eng
recordid cdi_epo_espacenet_US11922934B2
source esp@cenet
subjects ACOUSTICS
CALCULATING
COMPUTING
COUNTING
ELECTRIC DIGITAL DATA PROCESSING
MUSICAL INSTRUMENTS
PHYSICS
SPEECH ANALYSIS OR SYNTHESIS
SPEECH OR AUDIO CODING OR DECODING
SPEECH OR VOICE PROCESSING
SPEECH RECOGNITION
title Generating response in conversation
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-01T08%3A46%3A24IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=Luan,%20Jian&rft.date=2024-03-05&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EUS11922934B2%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true