Multi-modal interface in a voice-activated network
Main Authors:
Format: Patent
Language: English
Subjects:
Online Access: Order full text
Summary: Systems and methods of the present technical solution enable a multi-modal interface for voice-based devices, such as digital assistants. The solution can enable a user to interact with video and other content through a touch interface and through voice commands. In addition to inputs such as stop and play, the present solution can also automatically generate annotations for displayed video files. From the annotations, the solution can identify one or more break points that are associated with different scenes, video portions, or how-to steps in the video. The digital assistant can receive an input audio signal and parse it to identify semantic entities within the input audio signal. The digital assistant can map the identified semantic entities to the annotations to select the portion of the video that corresponds to the user's request in the input audio signal.
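The abstract describes mapping semantic entities parsed from a voice request onto automatically generated video annotations in order to pick a break point. The sketch below is a minimal illustration of that matching step, not the patent's actual method: the `Annotation` record, the keyword-overlap score, and the sample data are all assumptions introduced for clarity.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    # Hypothetical annotation: a break point (seconds into the video) plus
    # descriptive terms generated for that scene, portion, or how-to step.
    start_seconds: float
    terms: set[str]

def select_break_point(semantic_entities: set[str],
                       annotations: list[Annotation]) -> float | None:
    """Map semantic entities parsed from the input audio signal to the
    annotation with the largest term overlap and return its break point."""
    best, best_score = None, 0
    for ann in annotations:
        score = len(semantic_entities & ann.terms)
        if score > best_score:
            best, best_score = ann, score
    return best.start_seconds if best else None

# Usage: a spoken request such as "show me the step where the dough is
# kneaded" might yield these entities after parsing.
annotations = [
    Annotation(0.0, {"ingredients", "flour", "yeast"}),
    Annotation(95.0, {"knead", "dough"}),
    Annotation(240.0, {"bake", "oven"}),
]
print(select_break_point({"knead", "dough", "step"}, annotations))  # -> 95.0
```

In practice, the matching could use richer signals than keyword overlap (for example, embeddings or entity types), but the overall flow (parse audio, extract entities, score annotations, jump to the associated break point) follows the structure described in the summary.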