Multi-modal model for dynamically responding to virtual characters

The disclosed embodiments relate to a method of controlling a virtual character (or "avatar") using a multi-modal model. The multi-modal model may receive a variety of input information related to a user and process that information using a plurality of internal models. The multi-modal model may combine the outputs of the internal models to produce trusted responses, such as emotional responses, through the virtual character. A link to a virtual character may be embedded in a web browser, and an avatar may be dynamically generated based on a user's selection to interact with the virtual character. A report may be generated for the customer that provides insight into features of the user's interaction with the virtual character associated with the customer.
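The abstract's pipeline (several modality-specific internal models whose outputs are combined to drive an emotional avatar response, plus an interaction log that feeds a customer report) can be sketched as follows. This is a minimal illustrative sketch, not the patented method: all class names, scoring rules, and thresholds (`text_model`, `audio_pitch > 0.7`, etc.) are hypothetical assumptions.

```python
from dataclasses import dataclass

@dataclass
class UserInput:
    text: str = ""
    audio_pitch: float = 0.0   # assumed: normalized pitch from a speech model
    face_smile: float = 0.0    # assumed: smile confidence from a vision model

def text_model(inp: UserInput) -> dict:
    """Toy internal model: keyword-based sentiment from the text channel."""
    positive = {"great", "thanks", "love", "happy"}
    negative = {"bad", "angry", "hate", "sad"}
    words = set(inp.text.lower().split())
    return {"joy": len(words & positive), "distress": len(words & negative)}

def audio_model(inp: UserInput) -> dict:
    """Toy internal model: high pitch read as excitement."""
    return {"joy": 1 if inp.audio_pitch > 0.7 else 0, "distress": 0}

def vision_model(inp: UserInput) -> dict:
    """Toy internal model: smile confidence read as joy."""
    return {"joy": 1 if inp.face_smile > 0.5 else 0, "distress": 0}

class MultiModalModel:
    """Combines internal models; emits an avatar response and logs it."""

    def __init__(self):
        self.internal_models = [text_model, audio_model, vision_model]
        self.interaction_log = []  # feeds the customer-facing report

    def respond(self, inp: UserInput) -> str:
        # Sum the per-emotion scores across all internal models.
        scores = {"joy": 0, "distress": 0}
        for model in self.internal_models:
            for emotion, score in model(inp).items():
                scores[emotion] += score
        emotion = max(scores, key=scores.get)
        self.interaction_log.append({"input": inp.text, "emotion": emotion})
        return f"avatar responds with {emotion}"

    def report(self) -> dict:
        """Aggregate per-emotion counts, as a stand-in for the report."""
        counts = {}
        for row in self.interaction_log:
            counts[row["emotion"]] = counts.get(row["emotion"], 0) + 1
        return counts
```

For example, `MultiModalModel().respond(UserInput(text="this is great", face_smile=0.9))` would score "joy" from both the text and vision channels and answer accordingly; `report()` then summarizes the logged interactions per emotion.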

Detailed Description

Bibliographic Details
Main Authors: MCINTYRE-COVIN ARMANDO; HOURIGAN, RYAN; EISENBERG JOSH
Format: Patent
Language: Chinese; English
Subjects:
Online Access: Order full text
recordid cdi_epo_espacenet_CN114303116A
source esp@cenet
subjects ACOUSTICS
CALCULATING
COMPUTING
COUNTING
ELECTRIC DIGITAL DATA PROCESSING
IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
MUSICAL INSTRUMENTS
PHYSICS
SPEECH ANALYSIS OR SYNTHESIS
SPEECH OR AUDIO CODING OR DECODING
SPEECH OR VOICE PROCESSING
SPEECH RECOGNITION
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-15T19%3A50%3A50IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=MCINTYRE-COVIN%20ARMANDO&rft.date=2022-04-08&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3ECN114303116A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true