MAMF-GCN: Multi-scale adaptive multi-channel fusion deep graph convolutional network for predicting mental disorder
Saved in:
Published in: Computers in Biology and Medicine, 2022-09, Vol. 148, Article 105823
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Summary: Existing diagnoses of mental disorders rely on symptoms, patient descriptions, and rating scales, which are not objective enough. We attempt to explore an objective diagnostic method based on fMRI data. Graph neural networks (GNNs) have received increasing attention recently because of their advantages in processing unstructured relational data, especially fMRI data. However, deeply embedding and effectively integrating different modalities and scales in a GNN remains a challenge. Rather than achieving a high degree of fusion, existing GCN methods simply combine imaging and non-imaging data. Moreover, most graph convolutional network (GCN) models use shallow structures, making it difficult to learn latent information. Furthermore, current graph construction approaches usually rely on a single brain atlas, which limits the analysis and its results.
In this paper, a multi-scale adaptive multi-channel fusion deep graph convolutional network based on an attention mechanism (MAMF-GCN) is proposed to better integrate features from different modalities and atlases by exploiting multi-channel correlation. An encoder automatically combines one channel with non-imaging data to generate similarity weights between subjects using a similarity perception mechanism. The other channels generate multi-scale imaging features from fMRI data processed with different atlases. Multi-modal information is fused using an adaptive convolution module that applies a deep graph convolutional network (GCN) to extract information from richer hidden layers.
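The pipeline the summary describes can be sketched at a toy scale: build one population graph from phenotypic (non-imaging) similarity and others from imaging features, run a graph convolution on each channel, and fuse the per-channel embeddings with attention weights. The sketch below is a minimal NumPy illustration under our own simplifying assumptions (binary phenotypic similarity, a single GCN layer, softmax attention over scalar channel scores); function names and the fusion scheme are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adjacency(A):
    """Symmetrically normalize A + I (standard GCN propagation matrix)."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer(A_norm, H, W):
    """One graph convolution: propagate features along edges, then ReLU."""
    return np.maximum(A_norm @ H @ W, 0.0)

def phenotypic_similarity(pheno):
    """Toy similarity-perception stand-in: edge weight 1 where two
    subjects share a phenotypic attribute (e.g. acquisition site)."""
    return (pheno[:, None] == pheno[None, :]).astype(float)

def attention_fusion(channel_embeddings):
    """Softmax attention over channels (stand-in for the adaptive
    fusion module): weight each channel, then sum."""
    stack = np.stack(channel_embeddings)           # (C, N, F)
    scores = stack.mean(axis=(1, 2))               # one scalar score per channel
    weights = np.exp(scores) / np.exp(scores).sum()
    return np.tensordot(weights, stack, axes=1)    # weighted sum -> (N, F)

# Toy population: 6 subjects, 4-dim connectivity features
N, F = 6, 4
pheno = np.array([0, 0, 1, 1, 0, 1])               # non-imaging attribute
A_pheno = phenotypic_similarity(pheno)             # phenotypic channel graph
A_atlas = rng.random((N, N))                       # imaging channel graph
A_atlas = (A_atlas + A_atlas.T) / 2                # symmetrize
X = rng.random((N, F))                             # subject features
W = rng.random((F, F))                             # shared layer weights

channels = [gcn_layer(normalize_adjacency(A), X, W)
            for A in (A_pheno, A_atlas)]
fused = attention_fusion(channels)
print(fused.shape)  # (6, 4)
```

In the actual model the channel count, graph construction, and fusion weights are learned end to end over a much deeper GCN; this sketch only fixes the data flow: per-channel graphs, per-channel convolution, attention-weighted fusion.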
To demonstrate the effectiveness of our approach, we evaluate the proposed method on the Autism Brain Imaging Data Exchange (ABIDE) dataset and the Major Depressive Disorder (MDD) dataset. Experimental results show that the proposed method outperforms many state-of-the-art methods in node classification performance. An extensive group of experiments on two disease prediction tasks shows that the performance of the proposed MAMF-GCN on the MDD and ABIDE datasets is improved by 3.37%–39.83% and 12.59%–32.92%, respectively. Moreover, the proposed method has also performed effectively in real-life clinical diagnosis. These comprehensive experiments demonstrate that our method is effective for node classification in brain disorder diagnosis.
The proposed MAMF-GCN method simultaneously extracts specific and common embeddings from the topology composed of multi-scale imaging features, phenotypic information, and their combinations, then …
ISSN: 0010-4825, 1879-0534
DOI: 10.1016/j.compbiomed.2022.105823