CrossConvPyramid: Deep Multimodal Fusion for Epileptic Magnetoencephalography Spike Detection
Saved in:
Published in: IEEE Journal of Biomedical and Health Informatics, 2025-02, pp. 1-12
Main authors: , , , , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Magnetoencephalography (MEG) is a vital non-invasive tool for epilepsy analysis, as it captures high-resolution signals that reflect changes in brain activity over time. Automated detection of epileptic spikes within these signals can significantly reduce the labor and time required for manual annotation of MEG recording data, thereby aiding clinicians in identifying epileptogenic foci and evaluating treatment prognosis. Research in this domain often uses the raw, multi-channel signals from MEG scans for spike detection, commonly neglecting the multi-channel spiking patterns of spatially adjacent channels. Moreover, epileptic spikes share considerable morphological similarities with artifact signals in the recordings, making it challenging for models to differentiate between the two. In this paper, we introduce a multimodal fusion framework that addresses these two challenges jointly. Instead of relying solely on the signal recordings, our framework also mines knowledge from their corresponding topography-map images, which encapsulate the spatial context and amplitude distribution of the input signals. To enable more effective data fusion, we present a novel multimodal feature fusion technique called CrossConvPyramid, built upon a convolutional pyramid architecture augmented by an attention mechanism. It first employs cross-attention and a convolutional pyramid to encode inter-modal correlations within the intermediate features extracted by the individual unimodal networks. It then applies a self-attention mechanism to refine and select the most salient features from both the inter-modal and unimodal features, specifically tailored to the spike classification task. Our method achieved average F1 scores of 92.88% and 95.23% on two distinct real-world MEG datasets from separate centers, outperforming the current state of the art by 2.31% and 0.88%, respectively. We plan to release the code on GitHub later.
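The abstract describes enriching each modality's intermediate features with the other's context via cross-attention before a self-attention refinement stage. The following is a minimal NumPy sketch of that bidirectional cross-attention fusion step only; all function names and the toy feature shapes are illustrative assumptions, not the authors' released implementation (which omits the convolutional pyramid and learned projections for brevity).

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feat, kv_feat):
    """Queries from one modality attend to keys/values from the other.

    q_feat:  (Nq, d) intermediate features of the querying modality
    kv_feat: (Nk, d) intermediate features of the other modality
    returns: (Nq, d) context-enriched features
    """
    d = q_feat.shape[-1]
    scores = q_feat @ kv_feat.T / np.sqrt(d)   # (Nq, Nk) similarity
    return softmax(scores, axis=-1) @ kv_feat  # weighted sum of the other modality

def fuse(signal_feat, topo_feat):
    """Bidirectional cross-attention: each modality is enriched with the
    other's context, then the results are concatenated for a downstream
    self-attention / classification stage."""
    sig2topo = cross_attention(signal_feat, topo_feat)  # signal attends to topography
    topo2sig = cross_attention(topo_feat, signal_feat)  # topography attends to signal
    return np.concatenate([sig2topo, topo2sig], axis=0)

# toy intermediate features: 4 signal tokens, 6 topography tokens, dim 8
rng = np.random.default_rng(0)
sig = rng.standard_normal((4, 8))
topo = rng.standard_normal((6, 8))
fused = fuse(sig, topo)
print(fused.shape)  # (10, 8)
```

In the paper's full pipeline, such fused features would then pass through the convolutional pyramid and self-attention selection described above; this sketch only illustrates how inter-modal correlations can be encoded.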
ISSN: 2168-2194, 2168-2208
DOI: 10.1109/JBHI.2025.3538582