Graph Adapter of EEG Foundation Models for Parameter Efficient Fine Tuning
Format: Article
Language: English
Online access: Order full text
Abstract: In diagnosing mental diseases from electroencephalography (EEG) data, neural network models such as Transformers have been employed to capture temporal dynamics. Additionally, it is crucial to learn the spatial relationships between EEG sensors, for which Graph Neural Networks (GNNs) are commonly used. However, fine-tuning large-scale complex neural network models to capture both temporal and spatial features simultaneously increases computational costs due to the large number of trainable parameters, and the limited availability of EEG datasets for downstream tasks makes it challenging to fine-tune such large models effectively. We propose EEG-GraphAdapter (EGA), a parameter-efficient fine-tuning (PEFT) approach that addresses these challenges. EGA is integrated into a pre-trained temporal backbone model as a GNN-based module and is fine-tuned on its own while the backbone model parameters remain frozen. This enables the acquisition of spatial representations of EEG signals for downstream tasks, significantly reducing computational overhead and data requirements. Experimental evaluations on the healthcare-related downstream tasks of Major Depressive Disorder and Abnormality Detection demonstrate that EGA improves performance by up to 16.1% in F1-score compared with the backbone BENDR model.
DOI: 10.48550/arxiv.2411.16155
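The record above only summarizes the method, but the frozen-backbone-plus-adapter pattern it describes is a standard PEFT technique. Below is a minimal PyTorch sketch of that general pattern under stated assumptions: a dummy stand-in for the pre-trained temporal encoder, a single graph-convolution adapter layer, and a backbone output shaped (batch, sensors, features). All class and variable names (`DummyTemporalBackbone`, `GraphAdapterLayer`, `AdaptedModel`) are hypothetical; the paper's actual EGA architecture and its integration with BENDR may differ.

```python
import torch
import torch.nn as nn


class DummyTemporalBackbone(nn.Module):
    """Hypothetical stand-in for a pre-trained per-sensor temporal encoder
    (e.g. BENDR). Assumes input EEG of shape (batch, sensors, time)."""
    def __init__(self, time_len: int, dim: int):
        super().__init__()
        self.proj = nn.Linear(time_len, dim)

    def forward(self, eeg: torch.Tensor) -> torch.Tensor:
        # (batch, sensors, time) -> (batch, sensors, dim)
        return self.proj(eeg)


class GraphAdapterLayer(nn.Module):
    """One graph-convolution step mixing features across EEG sensors."""
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (batch, sensors, dim); adj: (sensors, sensors), row-normalized
        # sensor adjacency. Broadcast matmul aggregates neighbor features.
        return torch.relu(adj @ self.linear(x))


class AdaptedModel(nn.Module):
    """Frozen temporal backbone + trainable GNN adapter + task head."""
    def __init__(self, backbone: nn.Module, dim: int, n_classes: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # freeze: no gradients for the backbone
        self.adapter = GraphAdapterLayer(dim)  # only these params are tuned
        self.head = nn.Linear(dim, n_classes)

    def forward(self, eeg: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(eeg)        # temporal features per sensor
        feats = self.adapter(feats, adj)  # add spatial structure via GNN
        return self.head(feats.mean(dim=1))  # pool over sensors, classify


# Usage sketch: 19 sensors, 256 time samples, binary task.
backbone = DummyTemporalBackbone(time_len=256, dim=64)
model = AdaptedModel(backbone, dim=64, n_classes=2)
adj = torch.eye(19)  # placeholder adjacency; a real one encodes montage layout
logits = model(torch.randn(8, 19, 256), adj)

# Parameter efficiency: only adapter + head parameters reach the optimizer.
optim = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)
```

The key design point is that `requires_grad = False` on the backbone means optimizer state and gradients exist only for the small adapter and head, which is what keeps the fine-tuning cheap in both compute and data, consistent with the abstract's claims.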