Multistep Networks for Deformable Multimodal Medical Image Registration


Detailed Description

Bibliographic Details
Published in: IEEE Access 2024, Vol. 12, p. 82676-82692
Main authors: Strittmatter, Anika; Zollner, Frank G.
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: We proposed neural networks for deformable multimodal medical image registration that use multiple steps and varying resolutions. The networks were trained jointly in an unsupervised manner with Mutual Information and Gradient-L2 loss. By comparing the multistep neural networks to each other, to a monostep/monoresolution network as a benchmark, and to the classical registration methods SimpleElastix and NiftyReg as a baseline, we investigated the impact of using multiple resolutions on the registration result. To assess the performance of the multistep networks, we used four three-dimensional multimodal datasets (a synthetic and an in-vivo liver dataset with CT and T1-weighted MR scans, an in-vivo kidney MR dataset with T1-weighted and T2-weighted MR scans, and an in-vivo prostate MR dataset with T2-weighted and DWI MR scans). Experimental results showed that incorporating multiple steps and resolutions in a neural network leads to registration results with high structural similarity (NMI up to 0.33 ± 0.02, Dice up to 90.8 ± 3.1) and minimal image folding (fraction of voxels with |J| ≤ 0 below 0.5%), resulting in a medically plausible transformation, while maintaining a low registration time (
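The folding metric quoted in the abstract counts voxels where the Jacobian determinant |J| of the predicted transformation is non-positive, i.e. where the deformation locally folds space. As a minimal illustration (not the authors' implementation), the percentage can be computed from a dense displacement field with finite differences; the function name and the voxel-unit convention below are assumptions.

```python
import numpy as np

def folding_percentage(disp):
    """Percentage of voxels with non-positive Jacobian determinant.

    disp: displacement field of shape (3, D, H, W) in voxel units.
    The transformation is phi(x) = x + disp(x), so its Jacobian is
    I + grad(disp), approximated here with central finite differences.
    (Illustrative sketch, not the paper's implementation.)
    """
    # grads[i, j] = d disp_i / d x_j, each of shape (D, H, W)
    grads = np.stack([np.stack(np.gradient(disp[c], axis=(0, 1, 2)))
                      for c in range(3)])
    # Jacobian of phi: identity plus displacement gradient
    jac = grads + np.eye(3).reshape(3, 3, 1, 1, 1)
    # Move the 3x3 matrix axes to the end for batched determinants
    det = np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))
    return 100.0 * np.mean(det <= 0)

# A zero displacement field is the identity transform: no folding.
print(folding_percentage(np.zeros((3, 8, 8, 8))))  # → 0.0
```

A value below 0.5%, as reported in the abstract, indicates that almost all of the deformation is locally invertible and hence medically plausible.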
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3412216