GANSER: A Self-Supervised Data Augmentation Framework for EEG-Based Emotion Recognition


Bibliographic Details
Published in: IEEE Transactions on Affective Computing, 2023-07, Vol. 14 (3), p. 2048-2063
Authors: Zhang, Zhi; Liu, Yan; Zhong, Sheng-hua
Format: Article
Language: English
Abstract: Electroencephalography (EEG)-based affective computing suffers from data scarcity. As a result, it is difficult to build effective, highly accurate, and stable models with machine learning algorithms, especially deep learning models. Data augmentation has recently been shown to improve deep learning models, increasing accuracy and stability and reducing overfitting. In this paper, we propose a novel data augmentation framework, named generative adversarial network-based self-supervised data augmentation (GANSER). As the first work to combine adversarial training with self-supervised learning for EEG-based emotion recognition, the proposed framework generates high-quality and high-diversity simulated EEG samples. In particular, adversarial training is used to learn an EEG generator and force the generated EEG signals to approximate the distribution of real samples, ensuring the quality of the augmented samples. A transformation operation masks parts of the EEG signals and forces the generator to synthesize plausible EEG signals from the unmasked parts, producing a wide variety of samples. The masking possibility used during transformation is introduced as prior knowledge to generalize the classifier to the augmented sample space. Finally, extensive experiments demonstrate that the proposed method improves emotion recognition performance and achieves state-of-the-art results.
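
The abstract describes a masking transformation that hides parts of an EEG sample so that the adversarially trained generator must synthesize the hidden portion, with the masking possibility acting as prior knowledge for the classifier. The sketch below is not the authors' code; it only illustrates, under assumed channel count, window length, and masking ratio, how such a masking step might look, with the trained generator left as a hypothetical stand-in.

    # Minimal sketch (assumptions, not the GANSER implementation) of a masking
    # transformation on a multichannel EEG sample: a random contiguous time span
    # is zeroed out, and a generator would later be asked to fill it in.
    import numpy as np

    def mask_eeg(sample: np.ndarray, mask_ratio: float, rng: np.random.Generator):
        """Zero out a random contiguous time segment in every channel.

        sample:     array of shape (channels, timesteps)
        mask_ratio: fraction of timesteps to hide (stands in for the paper's
                    'masking possibility', which serves as prior knowledge)
        Returns the masked sample and a boolean mask of hidden timesteps.
        """
        channels, timesteps = sample.shape
        span = int(mask_ratio * timesteps)
        start = rng.integers(0, timesteps - span + 1)
        mask = np.zeros(timesteps, dtype=bool)
        mask[start:start + span] = True
        masked = sample.copy()
        masked[:, mask] = 0.0
        return masked, mask

    # Hypothetical usage: a generator G trained adversarially against real EEG
    # would synthesize the masked span, yielding a simulated augmented sample.
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((32, 512))      # 32 channels, 512 timesteps (assumed)
    masked, mask = mask_eeg(eeg, mask_ratio=0.25, rng=rng)
    # augmented = G(masked)                   # generator left unspecified here
    print(masked.shape, int(mask.sum()), "timesteps masked")

Because the masked span is drawn at random each time, repeated application of this transformation to the same recording would yield many distinct inputs for the generator, which is what produces the diversity of augmented samples described in the abstract.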
ISSN: 1949-3045
DOI: 10.1109/TAFFC.2022.3170369