VISUALLY REPRESENTING RELATIONSHIPS IN AN EXTENDED REALITY ENVIRONMENT



Bibliographic Details
Main Authors: Solanki, Saransh; McCracken, Sean; Koh, Ken Brian
Format: Patent
Language: eng
Description
Summary: Techniques are described herein that enable a user to provide speech inputs to control an extended reality environment, where relationships between terms in a speech input are represented in three dimensions (3D) in the extended reality environment. For example, a language processing component determines a semantic meaning of the speech input and identifies terms in the speech input based on the semantic meaning. A 3D relationship component generates a 3D representation of a relationship between the terms and provides the 3D representation to a computing device for display. A 3D representation may include, for example, a modification to an object in an extended reality environment, or a 3D representation of concepts and sub-concepts in a mind map in an extended reality environment. The 3D relationship component may generate a searchable timeline using the terms provided in the speech input and a recording of an extended reality session.
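The pipeline the abstract describes (term extraction from a speech transcript, a 3D mind-map representation of concept/sub-concept relationships, and a searchable timeline over a recorded session) can be sketched in Python. All class names, method names, and the term-extraction heuristic below are illustrative assumptions, not details from the patent itself:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A concept in the mind map, placed at a 3D position in the XR scene."""
    term: str
    position: tuple                      # (x, y, z) coordinates
    children: list = field(default_factory=list)

class LanguageProcessingComponent:
    """Stand-in for the semantic analysis stage: pulls content terms
    out of a speech transcript (hypothetical, naive implementation)."""
    STOPWORDS = {"a", "an", "the", "of", "and", "to"}

    def extract_terms(self, transcript: str) -> list:
        words = [w.strip(".,!?").lower() for w in transcript.split()]
        return [w for w in words if w and w not in self.STOPWORDS]

class ThreeDRelationshipComponent:
    """Stand-in for the component that turns extracted terms into
    3D representations and a searchable session timeline."""

    def build_mind_map(self, terms: list) -> Node:
        # Treat the first term as the root concept and arrange the
        # remaining terms as sub-concepts spread along the x-axis.
        root = Node(terms[0], (0.0, 0.0, 0.0))
        for i, term in enumerate(terms[1:]):
            root.children.append(Node(term, (1.0 + i, 0.0, -1.0)))
        return root

    def build_timeline(self, timestamped_terms: list) -> dict:
        # Index each spoken term by the timestamps (seconds into the
        # recorded XR session) at which it occurred, so the recording
        # can be searched by term.
        index = {}
        for ts, term in timestamped_terms:
            index.setdefault(term, []).append(ts)
        return index

lp = LanguageProcessingComponent()
rel = ThreeDRelationshipComponent()

terms = lp.extract_terms("planets orbit the sun")
mind_map = rel.build_mind_map(terms)
timeline = rel.build_timeline([(3.2, "planets"), (7.5, "sun"), (9.1, "planets")])
```

In this sketch, searching the timeline for "planets" returns every point in the recorded session where the term was spoken, which is one plausible reading of the searchable-timeline feature described above.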