BNU-LSVED 2.0: Spontaneous multimodal student affect database with multi-dimensional labels

Bibliographic details
Published in: Signal Processing: Image Communication, November 2017, Vol. 59, pp. 168–181
Authors: Wei, Qinglan; Sun, Bo; He, Jun; Yu, Lejun
Format: Article
Language: English
Description
Abstract: In college classrooms, large quantities of digital-media data showing students' affective behaviors are captured by cameras on a daily basis. To provide a benchmark for affect recognition on such big-data collections, in this paper we propose the first large-scale spontaneous and multimodal student affect database. All videos in the database were selected from daily classroom recordings. The recruited subjects extracted one-person image sequences of their own affective behaviors and then annotated them according to standard rules set beforehand. Ultimately, we collected 2117 image sequences covering 11 types of student affective behaviors in a variety of classes. The Beijing Normal University Large-scale Spontaneous Visual Expression Database version 2.0 (BNU-LSVED 2.0) extends our previous BNU-LSVED 1.0 and has a number of new characteristics. The nonverbal behaviors and emotions in the new version are more spontaneous, since all image sequences come from videos recorded in actual classes rather than from behaviors elicited by induction videos. Moreover, it includes a greater variety of affective behaviors from which students' learning status during classes can be inferred; these behaviors include facial expressions, eye movements, head postures, body movements, and gestures. In addition, instead of providing only categorical emotion labels, the new version also assigns affective behavior labels and multi-dimensional Pleasure–Arousal–Dominance (PAD) labels to the image sequences. Both the detailed subjective descriptions and the statistical analyses of the self-annotation results demonstrate the reliability and effectiveness of the multi-dimensional labels in the database.

Highlights:
• First large student affect database recorded in classroom environments.
• A variety of multimodal affective behaviors.
• Annotated with both categorical and PAD emotion labels.
• Annotation reliability demonstrated through detailed analyses.
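To make the dual labeling scheme concrete, the sketch below shows one plausible way an annotated image sequence could be represented in code. The field names, value ranges, and example values are illustrative assumptions and are not taken from the BNU-LSVED 2.0 documentation.

```python
from dataclasses import dataclass

@dataclass
class AffectAnnotation:
    """Hypothetical record for one annotated image sequence (illustrative only)."""
    sequence_id: str      # identifier of the one-person image sequence
    behavior_label: str   # categorical affective-behavior label (assumed field)
    emotion_label: str    # categorical emotion label (assumed field)
    pleasure: float       # PAD dimension P; range assumed to be [-1, 1]
    arousal: float        # PAD dimension A
    dominance: float      # PAD dimension D

# Example record; all values are invented for illustration.
example = AffectAnnotation(
    sequence_id="seq_0001",
    behavior_label="leaning-forward",
    emotion_label="engagement",
    pleasure=0.4,
    arousal=0.6,
    dominance=0.2,
)
```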
ISSN: 0923-5965, 1879-2677
DOI: 10.1016/j.image.2017.08.012