A Novel and Powerful Dual-Stream Multi-Level Graph Convolution Network for Emotion Recognition

Bibliographic Details
Published in: Sensors (Basel, Switzerland), 2024-11, Vol. 24 (22), p. 7377
Main authors: Hou, Guoqiang; Yu, Qiwen; Chen, Guang; Chen, Fan
Format: Article
Language: English
Online access: Full text
Abstract: Emotion recognition enables machines to perceive and understand users' emotional states more acutely, thereby offering more personalized and natural interactive experiences. Given the regular patterns with which brain activity responds to human cognitive processes, we propose a powerful and novel dual-stream multi-level graph convolution network (DMGCN) that captures the hierarchies of connectivity between cerebral cortex neurons while improving computational efficiency. The DMGCN consists of a hierarchical dynamic geometric interaction neural network (HDGIL) and a multi-level feature fusion classifier (M2FC). First, the HDGIL diversifies representations by learning emotion-related representations across multi-level graphs. Subsequently, the M2FC combines the advantages of early and late feature fusion, enriching the final representations of EEG samples with additional detail. We conducted extensive experiments to validate the superiority of our model over numerous state-of-the-art (SOTA) baselines in terms of classification accuracy and the efficiency of graph embedding and information propagation, achieving accuracies of 98.73%, 95.97%, 72.74%, and 94.89% as well as gains of up to 0.59%, 0.32%, 2.24%, and 3.17% over baselines on the DEAP-Arousal, DEAP-Valence, DEAP, and SEED datasets, respectively. These experiments also demonstrated the effectiveness of each module for emotion recognition tasks.
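
The abstract describes two cooperating modules (multi-level graph streams in the HDGIL, early-plus-late fusion in the M2FC) without giving implementation details. The PyTorch sketch below only illustrates that general idea under our own assumptions: the DenseGCNLayer, the cluster-assignment pooling, and all dimensions are hypothetical stand-ins, not the authors' architecture.

```python
# Minimal sketch of a dual-stream, multi-level GCN with early+late feature
# fusion, assuming dense pre-normalized adjacency matrices. This is an
# illustration of the concept in the abstract, not the paper's DMGCN.
import torch
import torch.nn as nn

class DenseGCNLayer(nn.Module):
    """One graph convolution on a dense normalized adjacency: ReLU(A X W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, x):
        # a_hat: (N, N) normalized adjacency; x: (B, N, in_dim) node features
        return torch.relu(a_hat @ self.lin(x))

class DualStreamMultiLevelGCN(nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        # Two streams: one on the full electrode graph, one on a coarsened
        # (pooled) graph, standing in for the multi-level hierarchy.
        self.fine = DenseGCNLayer(in_dim, hid_dim)
        self.coarse = DenseGCNLayer(in_dim, hid_dim)
        # Fusion classifier: concatenate early (per-stream) and late
        # (combined) features -- an assumption about how early and late
        # fusion might be mixed, in the spirit of the M2FC.
        self.classifier = nn.Linear(3 * hid_dim, n_classes)

    def forward(self, a_fine, a_coarse, pool, x):
        # pool: (M, N) soft assignment mapping N electrodes to M clusters
        h_fine = self.fine(a_fine, x).mean(dim=1)               # early, stream 1
        h_coarse = self.coarse(a_coarse, pool @ x).mean(dim=1)  # early, stream 2
        h_late = h_fine + h_coarse                              # late fusion
        return self.classifier(torch.cat([h_fine, h_coarse, h_late], dim=-1))

# Toy usage: 32 EEG channels (the DEAP layout), 5 band-power features each.
N, M, B = 32, 8, 4
a_fine = torch.eye(N)                      # placeholder adjacency matrices
a_coarse = torch.eye(M)
pool = torch.rand(M, N).softmax(dim=-1)    # placeholder cluster assignment
model = DualStreamMultiLevelGCN(in_dim=5, hid_dim=16, n_classes=4)
logits = model(a_fine, a_coarse, pool, torch.randn(B, N, 5))
print(logits.shape)  # torch.Size([4, 4])
```

Feeding the classifier both per-stream and combined features is one simple way to keep early- and late-fusion information in the final representation; the paper's actual fusion scheme may differ.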
ISSN: 1424-8220
DOI: 10.3390/s24227377