Improving mental dysfunction detection from EEG signals: Self-contrastive learning and multitask learning with transformers

Bibliographic Details
Published in: Alexandria Engineering Journal, 2024-11, Vol. 106, p. 52-59
Authors: Basheer, Shakila; Aldehim, Ghadah; Alluhaidan, Ala Saleh; Sakri, Sapiah
Format: Article
Language: English
Online access: Full text
Description
Abstract: Existing works have relied on subjective interviews in clinical settings, which do not detect mental health problems at early stages. With advances in learning strategies, researchers are trying to find better ways to detect mental disorders and dysfunctions early. EEG signal measurement is one of the most prolific ways of identifying mental disorders and dysfunction non-invasively and ubiquitously. However, label scarcity and multiclass classification of EEG signal measurements have been challenges for the research community that hinder the realization of automated mental dysfunction identification from EEG signals. Data imbalance is another issue that relates indirectly to data scarcity. To tackle these challenges, we propose a novel Transformer-based Self-Contrastive and Multitask Learning (SCAM-Learning) framework for mental dysfunction classification using EEG signals. The SCAM-Learning framework combines Transformer networks, a self-supervised contrastive learning paradigm, and a multitask learning strategy to improve classification performance. Multitask learning is accomplished by using simple and complex data augmentation strategies to train the network on the pretext task. Self-supervised contrastive learning helps in dealing with the data and label scarcity issues. We also propose a novel cross-contrastive loss that improves the interdependence correlation matrix and thereby the classification performance. Our experimental results on a publicly available dataset show that the proposed method achieves up to an 11.89% performance gain over existing state-of-the-art methods.
ISSN:1110-0168
DOI:10.1016/j.aej.2024.06.058