Multi-modal model for dynamically responding to virtual characters
Format: Patent
Language: Chinese; English
Abstract: The disclosed embodiments relate to a method of controlling a virtual character (or "avatar") using a multi-modal model. The multi-modal model may receive a variety of input information related to a user and process that information using a plurality of internal models. The multi-modal model may combine the outputs of the internal models to produce trusted responses, such as emotional responses, through the virtual character. A link to a virtual character may be embedded in a web browser, and an avatar may be dynamically generated when a user selects to interact with the virtual character. A report may be generated for the customer that provides insight into characteristics of the users interacting with the virtual character associated with the customer.
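The abstract describes a multi-modal model that runs user inputs through a plurality of internal models and fuses their outputs into an emotional response delivered by the avatar. The patent does not disclose concrete model interfaces, so the following is only a minimal sketch of that fusion pattern under assumed names: `UserInput`, `text_sentiment`, `fuse_emotion`, and `avatar_response` are all hypothetical, and the keyword sentiment scorer and weighted fusion stand in for whatever internal models an implementation would actually use.

```python
from dataclasses import dataclass

# Hypothetical input bundle: one field per modality the model might receive.
@dataclass
class UserInput:
    text: str
    voice_pitch: float     # normalized 0..1, a stand-in for an audio feature
    facial_valence: float  # -1 (negative) .. 1 (positive), a stand-in for a vision feature

def text_sentiment(text: str) -> float:
    """Toy internal model: keyword-based sentiment score in [-1, 1]."""
    positive = {"great", "thanks", "love", "happy"}
    negative = {"bad", "angry", "hate", "sad"}
    words = text.lower().split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return max(-1.0, min(1.0, score / max(len(words), 1) * 5))

def fuse_emotion(inp: UserInput) -> str:
    """Combine the internal models' outputs into one emotional label.

    The weights here are arbitrary illustration, not values from the patent.
    """
    valence = (0.5 * text_sentiment(inp.text)
               + 0.3 * inp.facial_valence
               + 0.2 * (inp.voice_pitch - 0.5) * 2)
    if valence > 0.2:
        return "cheerful"
    if valence < -0.2:
        return "sympathetic"
    return "neutral"

def avatar_response(inp: UserInput) -> dict:
    """Drive the virtual character from the fused emotional estimate."""
    emotion = fuse_emotion(inp)
    lines = {
        "cheerful": "Glad to hear it! How else can I help?",
        "sympathetic": "I'm sorry about that. Let's sort it out together.",
        "neutral": "Okay. Tell me more.",
    }
    return {"emotion": emotion, "utterance": lines[emotion]}
```

In this sketch each modality is scored independently and the scalar outputs are fused by a weighted sum, which is one common way to let a "plurality of internal models" jointly determine the avatar's emotional response; a production system would likely replace the toy scorers with learned models.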