Emotion recognition using spatial-temporal EEG features through convolutional graph attention network
Published in: Journal of Neural Engineering, 2023-02, Vol. 20(1), p. 16046
Format: Article
Language: English
Online access: Full text
Abstract: Constructing an efficient human emotion recognition model based on electroencephalogram (EEG) signals is significant for realizing emotional brain-computer interaction and improving machine intelligence.
In this paper, we present a spatial-temporal feature fused convolutional graph attention network (STFCGAT) model based on multi-channel EEG signals for human emotion recognition. First, we combined the single-channel differential entropy (DE) feature with the cross-channel functional connectivity (FC) feature to extract both the temporal variation and the spatial topological information of the EEG. After that, a novel convolutional graph attention network was used to fuse the DE and FC features and further extract higher-level graph-structural information with sufficient expressive power for emotion recognition. Furthermore, we introduced a multi-head attention mechanism into the graph neural network to improve the generalization ability of the model.
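As a rough illustration of the single-channel DE feature mentioned above (a minimal sketch under a Gaussian assumption, not the paper's implementation): for a band-filtered EEG channel modeled as Gaussian, the differential entropy reduces to a closed-form function of the signal variance, 0.5·ln(2πeσ²).

```python
import numpy as np

def differential_entropy(x):
    """Differential entropy of a signal under a Gaussian assumption:
    DE = 0.5 * ln(2 * pi * e * var(x)).
    In practice x would be one EEG channel band-pass filtered to a
    frequency band of interest; filtering is omitted in this sketch."""
    var = np.var(x)
    return 0.5 * np.log(2.0 * np.pi * np.e * var)

# Example on a synthetic, EEG-like unit-variance noise segment
rng = np.random.default_rng(0)
segment = rng.normal(0.0, 1.0, size=200)
de = differential_entropy(segment)
```

For a unit-variance Gaussian the theoretical value is 0.5·ln(2πe) ≈ 1.42; the sample estimate fluctuates around it with finite data.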
We evaluated the emotion recognition performance of the proposed model on the public SEED and DEAP datasets. It achieved classification accuracies of 99.11% ± 0.83% and 94.83% ± 3.41% in the subject-dependent and subject-independent experiments on the SEED dataset, respectively, and accuracies of 91.19% ± 1.24% and 92.03% ± 4.57% for discrimination of arousal and valence in subject-independent experiments on the DEAP dataset. Notably, our model achieved state-of-the-art performance on cross-subject emotion recognition for both datasets. In addition, we gained insight into the proposed framework through both ablation experiments and analysis of the spatial patterns of the FC and DE features.
All these results demonstrate the effectiveness of the STFCGAT architecture for emotion recognition and indicate that there are significant differences in the spatial-temporal characteristics of the brain under different emotional states.
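The multi-head graph attention fusion described in the abstract can be sketched as follows. This is a minimal NumPy illustration of a generic multi-head graph attention layer (in the style of Veličković et al.'s GAT); the actual STFCGAT layer, its graph construction from FC features, and its parameters are not specified in this record.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_graph_attention(H, A, W_list, a_list):
    """One multi-head graph attention layer.
    H: (N, F) node features, e.g. per-electrode DE features;
    A: (N, N) adjacency mask (nonzero = edge), e.g. thresholded FC;
    W_list: per-head projection matrices, each (F, F');
    a_list: per-head attention vectors, each (2*F',).
    Returns the concatenation of all head outputs, shape (N, heads * F')."""
    outputs = []
    for W, a in zip(W_list, a_list):
        Z = H @ W                               # projected features (N, F')
        Fp = Z.shape[1]
        # attention logit e_ij = LeakyReLU(a^T [z_i || z_j]),
        # split into a source term and a destination term
        src = Z @ a[:Fp]                        # (N,)
        dst = Z @ a[Fp:]                        # (N,)
        e = src[:, None] + dst[None, :]         # (N, N)
        e = np.where(e > 0, e, 0.2 * e)         # LeakyReLU, slope 0.2
        e = np.where(A > 0, e, -1e9)            # mask out non-edges
        alpha = softmax(e, axis=1)              # attention coefficients
        outputs.append(alpha @ Z)               # attention-weighted aggregation
    return np.concatenate(outputs, axis=1)

# Toy usage: 4 nodes, 3 input features, 2 heads of width 2, fully connected graph
rng = np.random.default_rng(1)
N, F, Fp, heads = 4, 3, 2, 2
H = rng.normal(size=(N, F))
A = np.ones((N, N))                             # includes self-loops
W_list = [rng.normal(size=(F, Fp)) for _ in range(heads)]
a_list = [rng.normal(size=(2 * Fp,)) for _ in range(heads)]
out = multi_head_graph_attention(H, A, W_list, a_list)
```

Each head learns its own attention pattern over the electrode graph; concatenating the heads is the standard way multi-head attention broadens what a single layer can express.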
ISSN: 1741-2560, 1741-2552
DOI: 10.1088/1741-2552/acb79e