A multi-slice attention fusion and multi-view personalized fusion lightweight network for Alzheimer's disease diagnosis
Published in: BMC Medical Imaging 2024-09, Vol. 24 (1), p. 258-12, Article 258
Authors: , , , , ,
Format: Article
Language: English
Online access: Full text
Abstract: Alzheimer's disease (AD) is a neurological illness that significantly impacts individuals' daily lives. In the intelligent diagnosis of AD, 3D networks require substantial computational resources and storage space for model training, leading to increased model complexity and training time. On the other hand, 2D slice analysis may overlook the 3D structural information of MRI, resulting in information loss.
We propose a multi-slice attention fusion and multi-view personalized fusion lightweight network for automated AD diagnosis. It incorporates a multi-branch lightweight backbone to extract features from the sagittal, axial, and coronal views of MRI, respectively. We also introduce a novel multi-slice attention fusion module, which combines global and local channel attention mechanisms to ensure consistent classification across multiple slices. In addition, a multi-view personalized fusion module assigns appropriate weights to the three views, taking into account the varying significance of each view for accurate classification. To enhance the performance of the multi-view personalized fusion module, we employ a label consistency loss to guide the model's learning, encouraging more consistent and stable representations across all three views.
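The multi-view personalized fusion and label consistency loss described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names, the softmax weighting over views, and the pairwise squared-difference consistency term are all assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_views(logits_by_view, view_scores):
    """Weighted fusion of per-view logits.

    logits_by_view: (3, num_classes) logits from the sagittal, axial,
                    and coronal branches.
    view_scores:    (3,) per-sample view importance scores (hypothetical;
                    in the paper these would be learned).
    """
    weights = softmax(view_scores)   # personalized view weights, sum to 1
    return weights @ logits_by_view  # (num_classes,) fused logits

def label_consistency_loss(probs_by_view):
    """Mean pairwise squared distance between the views' class probabilities.

    Encourages the three views to agree on the prediction -- a plausible
    stand-in for the paper's label consistency loss, not its exact form.
    """
    n = len(probs_by_view)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            total += np.sum((probs_by_view[i] - probs_by_view[j]) ** 2)
    return total / (n * (n - 1) / 2)
```

With equal view scores the fusion reduces to a simple average of the three views, and identical per-view predictions drive the consistency loss to zero.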
The proposed method substantially reduces the number of parameters and FLOPs, to only 3.75M and 4.45G respectively, while improving accuracy by 10.5% to 14% across the three tasks. In the classification tasks of AD vs. CN, AD vs. MCI, and MCI vs. CN, the proposed method achieves accuracies of 95.63%, 86.88%, and 85.00%, respectively, outperforming existing methods.
The results show that the proposed approach not only excels in resource utilization but also significantly outperforms the four comparison methods in accuracy and sensitivity, particularly in detecting early-stage AD lesions. It can precisely capture and accurately identify subtle brain lesions, providing crucial technical support for early intervention and treatment.
ISSN: 1471-2342
DOI: 10.1186/s12880-024-01429-8