Diegraph: dual-branch information exchange graph convolutional network for deformable medical image registration
Saved in:
Published in: | Neural computing & applications 2023-11, Vol.35 (32), p.23631-23647 |
---|---|
Main authors: | , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
Abstract: | Deformable medical image registration (DMIR) is a crucial task in medical image analysis. Current learning-based approaches concatenate two images into two channels, then use convolution- or transformer-based networks to extract features for modeling spatial correspondence. However, this approach mixes the features of the two images before searching for correspondence, making the spatial modeling process challenging. To solve this problem, this paper proposes a purely convolutional framework, DIEGraph, for DMIR. The proposed method first extracts features separately from the moving and fixed images using a dual-branch network. Then, it splits the features into patches and constructs a graph by connecting the nearest neighbors of these patches. Based on the graph representation, the proposed method uses an information exchange graph convolutional network module to exchange dual-branch information and model correspondence. Additionally, a dilated residual fusion module is deployed for feature fusion. We conducted a qualitative and quantitative evaluation on two 3D datasets. The proposed method achieved higher Dice scores than TransMorph by 0.9% and 3.8% on the two datasets, respectively, while maintaining a comparable voxel folding percentage. Ablation studies also verify the effectiveness of the components in the proposed model. |
---|---|
ISSN: | 0941-0643 1433-3058 |
DOI: | 10.1007/s00521-023-08979-y |
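The abstract describes the core mechanism: patch-level features from each branch are linked into a nearest-neighbor graph, and a graph-convolutional layer exchanges information between the two branches to model correspondence. The following is a minimal illustrative sketch, not the paper's implementation; the function names, brute-force distance computation, mean aggregation, and tanh activation are all assumptions made for clarity.

```python
import numpy as np

def knn_graph(patches, k=3):
    """Connect each patch to its k nearest neighbors by feature distance.

    patches: (N, D) array of patch feature vectors.
    Returns an (N, k) array of neighbor indices (self-loops excluded).
    """
    # Pairwise Euclidean distances via broadcasting: (N, N)
    dists = np.linalg.norm(patches[:, None, :] - patches[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)  # a patch is not its own neighbor
    return np.argsort(dists, axis=1)[:, :k]

def exchange_gcn_layer(feat_a, feat_b, nbrs_a, nbrs_b, w_self, w_cross):
    """One hypothetical information-exchange step between the two branches.

    Each branch first aggregates its own graph neighbors (mean pooling),
    then mixes in the other branch's aggregate through a cross weight.
    feat_a, feat_b: (N, D) branch features; nbrs_*: (N, k) neighbor indices;
    w_self, w_cross: (D, D) weight matrices.
    """
    agg_a = feat_a[nbrs_a].mean(axis=1)  # (N, k, D) -> (N, D)
    agg_b = feat_b[nbrs_b].mean(axis=1)
    new_a = np.tanh(agg_a @ w_self + agg_b @ w_cross)
    new_b = np.tanh(agg_b @ w_self + agg_a @ w_cross)
    return new_a, new_b

# Toy usage: 8 patches with 4-dimensional features per branch.
rng = np.random.default_rng(0)
moving_patches = rng.normal(size=(8, 4))
fixed_patches = rng.normal(size=(8, 4))
nbrs_m = knn_graph(moving_patches, k=3)
nbrs_f = knn_graph(fixed_patches, k=3)
w_self = rng.normal(size=(4, 4))
w_cross = rng.normal(size=(4, 4))
out_m, out_f = exchange_gcn_layer(moving_patches, fixed_patches,
                                  nbrs_m, nbrs_f, w_self, w_cross)
```

The dual-branch design keeps the moving and fixed image features separate until the exchange step, which is the abstract's stated motivation over channel-concatenation approaches.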