Novel multimodal emotion detection method using Electroencephalogram and Electrocardiogram signals


Detailed Description

Bibliographic Details
Published in: Biomedical Signal Processing and Control, 2024-06, Vol. 92, p. 106002, Article 106002
Authors: Saha, Purnata; Ansaruddin Kunju, Ali K.; Majid, Molla E.; Bin Abul Kashem, Saad; Nashbat, Mohammad; Ashraf, Azad; Hasan, Mazhar; Khandakar, Amith; Shafayet Hossain, Md; Alqahtani, Abdulrahman; Chowdhury, Muhammad E.H.
Format: Article
Language: English
Online access: Full text
Description
Abstract:
• A novel deep learning-based approach to reconstruct the EEG signal while preserving its morphology by removing EOG artifacts.
• A comprehensive multimodal performance analysis of EEG, ECG, and PPG in detecting emotions.
• Investigation of a large number of time, frequency, and time–frequency features for emotion recognition.
• A novel multimodal network for emotion recognition that outperforms the state-of-the-art reported performance.

Emotion Recognition Systems (ERS) play a pivotal role in facilitating naturalistic Human-Machine Interaction (HMI). This research utilized a dataset of diverse physiological signals, including Electroencephalogram (EEG), Photoplethysmography (PPG), and Electrocardiogram (ECG), to detect emotions evoked by video stimuli. The study addressed challenges with EEG data, particularly prefrontal channels contaminated by eye-blink artifacts. To tackle this, a novel 1D deep learning model, MultiResUNet3p, effectively generated clean EEG signals. Extensive time-domain (TD), frequency-domain (FD), and time–frequency-domain (TFD) features were extracted from each modality, and the study found that combining 112 features from EEG and ECG achieved the highest accuracy. The classification task encompassed six emotions, and the model demonstrated outstanding performance: 96.12% accuracy in binary classification (Positive vs. Negative) and 94.25% accuracy in the more complex multiclass classification of six emotions (Happy, Anger, Disgust, Fear, Neutral, and Sad). This research underscores the potential of integrating multiple physiological signals and advanced techniques to significantly improve emotion recognition accuracy, particularly in real-world scenarios involving naturalistic HMI.
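To illustrate the kind of feature extraction the abstract refers to, the sketch below computes a few generic time-domain and frequency-domain features from one physiological-signal window using NumPy and SciPy. This is a minimal, hypothetical example: the feature names, frequency bands, sampling rate, and window length are illustrative assumptions, not the 112-feature set or the preprocessing pipeline used in the paper.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.signal import welch
from scipy.stats import kurtosis, skew

def extract_features(signal, fs=128.0):
    """Compute simple TD and FD features from one EEG/ECG window.

    Illustrative only: band limits and statistics are generic choices,
    not the feature set described in the article.
    """
    # Time-domain (TD) statistics of the raw window.
    features = {
        "mean": float(np.mean(signal)),
        "std": float(np.std(signal)),
        "skew": float(skew(signal)),
        "kurtosis": float(kurtosis(signal)),
    }
    # Frequency-domain (FD): band power from the Welch power spectrum.
    freqs, psd = welch(signal, fs=fs, nperseg=min(256, len(signal)))
    bands = {"theta": (4.0, 8.0), "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        features[name] = float(trapezoid(psd[mask], freqs[mask]))
    return features

# Usage on a synthetic 4 s window sampled at 128 Hz.
rng = np.random.default_rng(0)
window = rng.standard_normal(512)
feats = extract_features(window)
print(sorted(feats))
```

Feature vectors like this, computed per modality and concatenated, are the typical input to the kind of multimodal classifier the abstract describes.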
ISSN: 1746-8094, 1746-8108
DOI: 10.1016/j.bspc.2024.106002