New modalities, new challenges - Annotating sketching and gaze data
Main Authors:
Format: Conference Paper
Language: English; Turkish
Subjects:
Online Access: Order full text
Abstract: One active line of research in the IUI community aims to build interfaces that combine multiple communication modalities to support more natural human-computer interaction. Multimodal interaction research relies heavily on the availability of carefully annotated data in various modalities. As a result, many authors have proposed general-purpose annotation tools. However, existing tools do not support annotation of a number of recently emerging modalities. In particular, annotation of pen and eye gaze data is not fully supported by existing annotation systems, despite the increasing popularity of tablets and eye gaze-aware systems. This paper presents our efforts in designing and implementing a general-purpose annotator with comprehensive support for a large number of modalities.
DOI: 10.1109/SIU.2013.6531471