Emotion Recognition From EEG Signals of Hearing-Impaired People Using Stacking Ensemble Learning Framework Based on a Novel Brain Network



Bibliographic Details
Published in: IEEE Sensors Journal, 2021-10, Vol. 21 (20), pp. 23245-23255
Authors: Kang, Qiaoju; Gao, Qiang; Song, Yu; Tian, Zekun; Yang, Yi; Mao, Zemin; Dong, Enzeng
Format: Article
Language: English
Description

Abstract: Emotion recognition based on electroencephalography (EEG) signals has become an active research topic in the fields of neuroscience, psychology, neural engineering, and computer science. However, existing studies focus mainly on normal or depressed subjects, and few report on hearing-impaired subjects. In this work, we collected the EEG signals of 15 hearing-impaired subjects for categorizing three types of emotion (positive, neutral, and negative). To study the differences in functional connectivity between normal and hearing-impaired subjects under different emotional states, a novel brain-network stacking ensemble learning framework is proposed. The phase-locking value (PLV) is used to calculate the correlation between EEG channels, and a brain network is then constructed using double thresholds. The spatial features of the brain network are extracted from the perspectives of local differentiation and global integration, and the stacking ensemble learning framework is used to classify the fused features. To evaluate the proposed model, extensive experiments were carried out on the SEED dataset; the results show that the proposed method achieves superior performance to state-of-the-art models, with an average classification accuracy of 0.955 (std: 0.052). On the hearing-impaired data, the average classification accuracy is 0.984 (std: 0.005). Finally, we investigated the activation patterns to reveal the brain regions and inter-channel relations that are important for emotion recognition.
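The abstract outlines a concrete pipeline: PLV connectivity, a double-threshold brain network, local/global spatial graph features, and a stacking ensemble over the fused features. As a rough illustration of the first three steps, the following Python sketch computes PLV via the Hilbert transform and extracts simple graph metrics; the threshold values, the specific metrics, and all function names are assumptions for illustration, not the paper's exact settings.

```python
# Minimal sketch of PLV connectivity, double-threshold network construction,
# and local/global graph features (illustrative, not the paper's exact method).
import numpy as np
import networkx as nx
from scipy.signal import hilbert

def plv_matrix(eeg):
    """eeg: (n_channels, n_samples) band-pass-filtered EEG segment."""
    phases = np.angle(hilbert(eeg, axis=1))  # instantaneous phase per channel
    n = eeg.shape[0]
    plv = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            dphi = phases[i] - phases[j]
            # PLV = magnitude of the mean phase-difference vector in [0, 1]
            plv[i, j] = plv[j, i] = np.abs(np.mean(np.exp(1j * dphi)))
    return plv

def brain_network_features(plv, lo=0.3, hi=0.9):
    """Binarize PLV with two thresholds (assumed values), then extract
    local-differentiation and global-integration features."""
    adj = ((plv >= lo) & (plv <= hi)).astype(float)
    np.fill_diagonal(adj, 0.0)
    g = nx.from_numpy_array(adj)
    local = list(nx.clustering(g).values())   # local differentiation
    global_int = [nx.global_efficiency(g)]    # global integration
    return np.array(local + global_int)
```

For the classification stage, scikit-learn's generic StackingClassifier can stand in for the paper's stacking ensemble framework; the base learners and meta-learner chosen below are assumptions, since this record does not specify them.

```python
# Hedged sketch of a stacking ensemble over fused brain-network features.
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

stack = StackingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
    ],
    final_estimator=LogisticRegression(),
    cv=5,  # out-of-fold base predictions train the meta-learner
)
# Usage: stack.fit(X_train, y_train); stack.score(X_test, y_test)
```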
ISSN: 1530-437X, 1558-1748
DOI: 10.1109/JSEN.2021.3108471