A semi-automatic annotation tool for unobtrusive gesture analysis
Published in: Language Resources and Evaluation, 2018-06, Vol. 52 (2), p. 433-460
Main authors: , ,
Format: Article
Language: English
Online access: Full text
Abstract: In a variety of research fields, including linguistics, human-computer interaction, psychology, sociology, and behavioral studies, there is growing interest in the role of gestural behavior related to speech and other modalities. The analysis of multimodal communication requires high-quality video data and detailed annotation of the different semiotic resources under scrutiny. In the majority of cases, the annotation of hand position, hand motion, gesture type, etc. is done manually, a time-consuming enterprise requiring multiple annotators and substantial resources. In this paper we present a semi-automatic alternative that focuses on minimizing the manual workload while guaranteeing highly accurate annotations.
First, we discuss our approach, which consists of several processing steps: identifying the hands in images, calculating the motion of the hands, segmenting the recording into gesture and non-gesture events, and so on. Second, we validate our approach against two existing corpora in terms of accuracy and usefulness. The proposed approach is designed to provide annotations according to the McNeill (1992) gesture space, and the output is compatible with annotation tools such as ELAN.
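The paper details its own pipeline; purely as an illustration of the kind of motion-based gesture/non-gesture segmentation the abstract describes, the sketch below thresholds dense optical-flow magnitude per frame and merges runs of moving frames into timed events. It is not the authors' implementation: the OpenCV Farnebäck flow, the whole-frame motion proxy (the paper localizes the hands first), and the MOTION_THRESHOLD and MIN_EVENT_FRAMES constants are all assumptions for illustration.

```python
import cv2
import numpy as np

MOTION_THRESHOLD = 1.5   # assumed: mean flow magnitude (px/frame) counted as motion
MIN_EVENT_FRAMES = 10    # assumed: shortest run of moving frames kept as an event

def segment_gestures(video_path):
    """Label frames as moving/still via dense optical flow, then merge
    runs of moving frames into (start_s, end_s) gesture events.
    Uses whole-frame motion as a stand-in for hand motion; the paper
    first localizes the hands in the image."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    ok, prev = cap.read()
    if not ok:
        return []
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    moving = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Mean flow magnitude over the frame decides moving vs. still.
        moving.append(np.linalg.norm(flow, axis=2).mean() > MOTION_THRESHOLD)
        prev_gray = gray
    cap.release()

    # Merge consecutive moving frames into timed events.
    events, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i
        elif not m and start is not None:
            if i - start >= MIN_EVENT_FRAMES:
                events.append((start / fps, i / fps))
            start = None
    if start is not None and len(moving) - start >= MIN_EVENT_FRAMES:
        events.append((start / fps, len(moving) / fps))
    return events

def write_tab_delimited(events, out_path, tier="gesture"):
    """Write events as tab-delimited text; ELAN can import such files
    via File > Import > CSV / Tab-delimited Text."""
    with open(out_path, "w") as f:
        for start_s, end_s in events:
            f.write(f"{tier}\t{start_s:.3f}\t{end_s:.3f}\tgesture\n")
```

A fuller system would replace the whole-frame motion measure with per-hand trajectories and map event positions onto the McNeill (1992) gesture-space grid before export.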
ISSN: 1574-020X