Multi-Modal Emotion Classification in Virtual Reality Using Reinforced Self-Training
Published in: Journal of advanced computational intelligence and intelligent informatics, 2023-09, Vol. 27 (5), p. 967-975
Main authors: , , ,
Format: Article
Language: eng
Subjects:
Online access: Full text
Abstract: Affective computing focuses on recognizing emotions using a combination of psychology, computer science, and biomedical engineering. With virtual reality (VR) becoming more widely accessible, affective computing has become increasingly important for supporting social interactions on online virtual platforms. However, accurately estimating a person's emotional state in VR is challenging because conditions differ from the real world; for example, facial expressions are unavailable. This research proposes a self-training method that uses unlabeled data and a reinforcement learning approach to select and label data more accurately. Experiments on a dataset of dialogues between VR players show that the proposed method achieved an accuracy of over 80% on dominance and arousal labels and outperformed previous techniques in the few-shot classification of emotions based on physiological signals.
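The abstract describes the approach only at a high level. The sketch below illustrates one way such a reinforced self-training loop could be organized: a few-shot classifier pseudo-labels unlabeled physiological data, and a bandit-style policy learns which confidence threshold to use when accepting pseudo-labels, rewarded by the change in validation accuracy. The synthetic features, the epsilon-greedy bandit, the thresholds, and all hyperparameters are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of reinforced self-training for few-shot emotion
# classification. All data, actions, and hyperparameters are assumptions
# made for illustration; the paper's exact method may differ.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_split(n, d=8):
    # Toy stand-in for physiological features with binary arousal labels;
    # real data would come from VR player recordings.
    X = rng.normal(size=(n, d))
    y = (X[:, :2].sum(axis=1) + 0.3 * rng.normal(size=n) > 0).astype(int)
    return X, y

X_lab, y_lab = make_split(40)    # small labeled (few-shot) set
X_unlab, _ = make_split(400)     # unlabeled pool
X_val, y_val = make_split(200)   # validation set providing the RL reward

# Actions: candidate confidence thresholds for accepting pseudo-labels.
thresholds = np.array([0.6, 0.7, 0.8, 0.9])
q_values = np.zeros(len(thresholds))   # bandit value estimates
counts = np.zeros(len(thresholds))
epsilon = 0.2

model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
best_acc = accuracy_score(y_val, model.predict(X_val))

for step in range(10):
    # Epsilon-greedy choice of a pseudo-label acceptance threshold.
    if rng.random() < epsilon:
        a = int(rng.integers(len(thresholds)))
    else:
        a = int(np.argmax(q_values))
    tau = thresholds[a]

    # Pseudo-label the unlabeled pool and keep only confident samples.
    proba = model.predict_proba(X_unlab)
    conf = proba.max(axis=1)
    keep = conf >= tau
    if keep.sum() == 0:
        reward = -1.0
    else:
        X_aug = np.vstack([X_lab, X_unlab[keep]])
        y_aug = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
        candidate = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
        acc = accuracy_score(y_val, candidate.predict(X_val))
        reward = acc - best_acc        # reward = validation improvement
        if acc > best_acc:             # keep the retrained model only if it helps
            model, best_acc = candidate, acc

    # Incremental bandit update of the chosen action's value estimate.
    counts[a] += 1
    q_values[a] += (reward - q_values[a]) / counts[a]

print(f"validation accuracy after reinforced self-training: {best_acc:.3f}")
```

In this toy setup the reward signal simply compares validation accuracy before and after retraining on the selected pseudo-labels; a full method would likely use a richer state and policy, but the loop shows how reinforcement learning can steer which unlabeled samples are trusted.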
ISSN: 1343-0130, 1883-8014
DOI: 10.20965/jaciii.2023.p0967