AUTONOMOUS GENERATION, DEPLOYMENT, AND PERSONALIZATION OF REAL-TIME INTERACTIVE DIGITAL AGENTS
Main authors:
Format: Patent
Language: English
Subjects:
Online access: Order full text
Abstract: A method includes receiving an input comprising multi-modal inputs such as text, audio, video, or context information from a client device associated with a user; assigning a task associated with the input to a server among a plurality of servers; determining a context response corresponding to the input based on the input and an interaction history between the computing system and the user; generating metadata specifying expressions, emotions, and non-verbal and verbal gestures associated with the context response by querying a trained behavior knowledge graph; generating media content output based on the determined context response and the generated metadata, the media content output comprising text, audio, and visual information corresponding to the determined context response in the expressions, the emotions, and the non-verbal and verbal gestures specified by the metadata; and sending, to the client device, instructions for presenting the generated media content output to the user.
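The abstract describes a server-side pipeline: receive a multi-modal input, assign the task to one of several servers, derive a context response from the input plus interaction history, query a trained behavior knowledge graph for expression/emotion/gesture metadata, render media output, and send presentation instructions back to the client. The sketch below is a minimal, hypothetical illustration of that flow; every class, method, and field name (e.g. `DigitalAgentService`, `BehaviorKnowledgeGraph`, `MultiModalInput`) is an assumption for readability, not an identifier from the patent.

```python
# Hypothetical sketch of the pipeline described in the abstract.
# All names are illustrative assumptions, not taken from the patent.
from dataclasses import dataclass, field


@dataclass
class MultiModalInput:
    """Input received from a client device (text, audio, video, context)."""
    user_id: str
    text: str = ""
    audio: bytes = b""
    video: bytes = b""
    context: dict = field(default_factory=dict)


@dataclass
class MediaOutput:
    """Generated media content: text, audio, and visual information."""
    text: str
    audio: bytes
    visuals: dict


class BehaviorKnowledgeGraph:
    """Stand-in for the trained behavior knowledge graph in the abstract."""

    def query(self, context_response: str) -> dict:
        # Return metadata specifying expressions, emotions, and gestures.
        # A real graph would be learned; this is a fixed placeholder.
        return {"expression": "smile", "emotion": "friendly",
                "gestures": ["nod", "open_palm"]}


class DigitalAgentService:
    def __init__(self, servers: list[str], graph: BehaviorKnowledgeGraph):
        self.servers = servers
        self.graph = graph
        self.history: dict[str, list[str]] = {}  # interaction history per user

    def assign_server(self, task_id: int) -> str:
        # Assign the task to one server among the plurality (round-robin here).
        return self.servers[task_id % len(self.servers)]

    def determine_context_response(self, inp: MultiModalInput) -> str:
        # Combine the current input with the stored interaction history.
        past = self.history.setdefault(inp.user_id, [])
        response = f"Responding to '{inp.text}' given {len(past)} prior turns."
        past.append(inp.text)
        return response

    def generate_media(self, response: str, meta: dict) -> MediaOutput:
        # Render the response in the expressions/emotions/gestures from meta.
        return MediaOutput(text=response, audio=b"", visuals=meta)

    def handle(self, task_id: int, inp: MultiModalInput) -> dict:
        server = self.assign_server(task_id)
        response = self.determine_context_response(inp)
        meta = self.graph.query(response)
        media = self.generate_media(response, meta)
        # Instructions for the client device to present the generated output.
        return {"server": server, "present": media}


if __name__ == "__main__":
    service = DigitalAgentService(["srv-a", "srv-b"], BehaviorKnowledgeGraph())
    result = service.handle(0, MultiModalInput(user_id="u1", text="Hello"))
    print(result["server"], result["present"].visuals)
```

The round-robin server assignment and the fixed metadata returned by the graph query are simplifications; the claimed method leaves the load-balancing strategy and the structure of the behavior knowledge graph unspecified at this level of detail.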