A subject-independent portable emotion recognition system using synchrosqueezing wavelet transform maps of EEG signals and ResNet-18


Full description

Bibliographic details
Published in: Biomedical Signal Processing and Control 2024-04, Vol. 90, p. 105875, Article 105875
Main authors: Bagherzadeh, Sara, Norouzi, Mohammad Reza, Bahri Hampa, Sepideh, Ghasri, Amirhesam, Tolou Kouroshi, Pouya, Hosseininasab, Saman, Ghasem Zadeh, Mohammad Amin, Nasrabadi, Ali Motie
Format: Article
Language: English
Online access: Full text
Description
Abstract:
•Develops a two-channel affective brain-computer interface (aBCI) for EEG signal processing.
•Employs the synchrosqueezing wavelet transform (SSWT) and ResNet-18 for time–frequency mapping and emotion recognition.
•Makes decisions based on weighted average probabilities from ResNet-18.
•Achieves a highest average accuracy of 77.75% from two common EEG channels (T7 and T8) across the SEED-IV, SEED-V, SEED-GER, and SEED-FRA databases.

Designing a portable affective brain-computer interface (aBCI) using EEG signals is challenging because of the large number of channels, not all of which are vital for emotion recognition. We aimed to simplify this by creating a two-channel portable aBCI using advanced time–frequency analysis and deep learning. Our approach uses the synchrosqueezing wavelet transform (SSWT), a time–frequency analysis that provides better frequency resolution for the fluctuations of EEG signals than the common wavelet transform. We fine-tuned a ResNet-18 convolutional neural network for sadness-versus-happiness classification. The two best channels were identified across four databases (SEED-IV, SEED-V, SEED-GER, and SEED-FRA) using the leave-one-subject-out method. The SSWT-ResNet18 model achieved average accuracies over sad and happy emotions of 76.66%, 78.12%, 81.25%, and 75.00% for the SEED-IV, SEED-V, SEED-GER, and SEED-FRA databases, respectively. Overall, our study demonstrates the potential for developing a rapid aBCI from the fewest channels by combining a precise time–frequency method with a deep learning technique. Our approach has promising implications for real-world emotion recognition applications.
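The weighted-average decision rule mentioned in the highlights can be illustrated with a minimal sketch. This is not the authors' code: the weights, probability values, and function names below are illustrative assumptions. Each channel's fine-tuned classifier (ResNet-18 in the paper) emits class probabilities; the sketch assumes channels are weighted, e.g. by per-channel validation accuracy, and the fused probabilities decide the emotion label.

```python
def fuse_probabilities(channel_probs, channel_weights):
    """Weighted average of per-channel class-probability vectors.

    channel_probs   -- list of [p_sad, p_happy] vectors, one per EEG channel
    channel_weights -- list of non-negative weights (e.g. validation accuracy)
    """
    total = sum(channel_weights)
    n_classes = len(channel_probs[0])
    fused = [0.0] * n_classes
    for probs, weight in zip(channel_probs, channel_weights):
        for i, p in enumerate(probs):
            fused[i] += (weight / total) * p
    return fused


def predict(channel_probs, channel_weights, labels=("sad", "happy")):
    """Return the emotion label with the highest fused probability."""
    fused = fuse_probabilities(channel_probs, channel_weights)
    best = max(range(len(fused)), key=fused.__getitem__)
    return labels[best], fused


# Hypothetical example: the T7 channel's classifier favors "happy",
# T8's favors "sad"; T7 carries slightly more weight, so the fused
# decision is "happy".
label, fused = predict([[0.30, 0.70], [0.55, 0.45]], [0.78, 0.70])
```

The fused vector is still a valid probability distribution because the weights are normalized to sum to one before averaging.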
ISSN: 1746-8094, 1746-8108
DOI: 10.1016/j.bspc.2023.105875