MSTGC: Multi-Channel Spatio-Temporal Graph Convolution Network for Multi-Modal Brain Networks Fusion

Bibliographic Details
Published in: IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2023-01, Vol. 31, p. 1-1
Authors: Xu, Ruting; Zhu, Qi; Li, Shengrong; Hou, Zhenghua; Shao, Wei; Zhang, Daoqiang
Format: Article
Language: English
Online access: Full text
Description
Summary: Multi-modal brain networks characterize the complex connectivity among different brain regions from both structural and functional aspects, and they have been widely used in the analysis of brain diseases. Although many multi-modal brain network fusion methods have been proposed, most of them cannot effectively extract the spatio-temporal topological characteristics of brain networks while fusing different modalities. In this paper, we develop an adaptive multi-channel graph convolutional network (GCN) fusion framework with graph contrastive learning, which not only effectively mines both the complementary and discriminative features of multi-modal brain networks, but also captures the dynamic characteristics and topological structure of brain networks. Specifically, we first divide the ROI-based time series into multiple overlapping time windows and construct a dynamic brain network representation from these windows. Second, we adopt an adaptive multi-channel GCN to extract the spatial features of the multi-modal brain networks under contrastive constraints, namely multi-modal fusion InfoMax and inter-channel InfoMin. These two constraints are designed to extract the complementary information among modalities and the specific information within a single modality, respectively. Moreover, two stacked long short-term memory (LSTM) units are utilized to capture the temporal information transferred across time windows. Finally, the extracted spatio-temporal features are fused, and a multilayer perceptron (MLP) is used for multi-modal brain network prediction. Experiments on an epilepsy dataset show that the proposed method outperforms several state-of-the-art methods in the diagnosis of brain diseases.
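
The abstract describes a concrete pipeline: sliding-window dynamic network construction, per-modality GCN channels trained with InfoMax/InfoMin contrastive constraints, stacked LSTMs across windows, and an MLP head. The following PyTorch sketch illustrates that pipeline under stated assumptions; all module names, dimensions, and hyperparameters (window_size, stride, hidden sizes, the InfoNCE temperature) are illustrative guesses, not the authors' released implementation.

```python
# A minimal sketch of the pipeline described in the abstract.
# Every name, size, and hyperparameter here is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

def dynamic_networks(roi_signals, window_size=30, stride=10):
    """Slice ROI time series (T x N) into overlapping windows and build
    one functional connectivity matrix (Pearson correlation) per window."""
    T, _ = roi_signals.shape
    adjs = [torch.corrcoef(roi_signals[s:s + window_size].T)
            for s in range(0, T - window_size + 1, stride)]
    return torch.stack(adjs)  # (num_windows, N, N)

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_norm @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Row-normalize the (possibly signed) connectivity matrix.
        a_norm = adj / adj.abs().sum(-1, keepdim=True).clamp(min=1e-6)
        return F.relu(self.lin(a_norm @ x))

def infonce(z1, z2, tau=0.5):
    """InfoNCE estimator over a batch of embedding pairs. Minimizing it
    pulls matched views together (an InfoMax-style constraint); negating
    it on channel pairs gives a simple InfoMin-style repulsion."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / tau
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

class MSTGCSketch(nn.Module):
    """Per-window GCN channels (one functional, one structural), two
    stacked LSTM layers across windows, and an MLP classification head."""
    def __init__(self, feat_dim, hid=64, n_classes=2):
        super().__init__()
        self.gcn_f = GCNLayer(feat_dim, hid)  # functional-modality channel
        self.gcn_s = GCNLayer(feat_dim, hid)  # structural-modality channel
        self.lstm = nn.LSTM(2 * hid, hid, num_layers=2, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(hid, hid), nn.ReLU(),
                                 nn.Linear(hid, n_classes))

    def forward(self, x, func_adjs, struct_adj):
        # x: (N, feat_dim) node features; func_adjs: (W, N, N); struct_adj: (N, N)
        zs = self.gcn_s(x, struct_adj).mean(0)  # pooled structural embedding
        seq = torch.stack([torch.cat([self.gcn_f(x, a).mean(0), zs])
                           for a in func_adjs]).unsqueeze(0)  # (1, W, 2*hid)
        out, _ = self.lstm(seq)
        return self.mlp(out[:, -1])  # predict from the last window state

# Usage on random data: 90 ROIs, 200 time points, identity node features.
x = torch.eye(90)
func_adjs = dynamic_networks(torch.randn(200, 90))
struct_adj = torch.rand(90, 90)
logits = MSTGCSketch(feat_dim=90)(x, func_adjs, struct_adj)
```

In this reading, the InfoMax term would be applied between the fused representation and each modality's embedding, and the InfoMin term between the two channel embeddings, alongside the classification loss; the paper itself should be consulted for the exact formulation.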
ISSN: 1534-4320, 1558-0210
DOI: 10.1109/TNSRE.2023.3275608