ContraReg: Contrastive Learning of Multi-modality Unsupervised Deformable Image Registration
Format: Article
Language: English
Abstract: Establishing voxelwise semantic correspondence across distinct imaging modalities is a foundational yet formidable computer vision task. Current multi-modality registration techniques maximize hand-crafted inter-domain similarity functions, are limited in modeling nonlinear intensity relationships and deformations, and may require significant re-engineering or underperform on new tasks, datasets, and domain pairs. This work presents ContraReg, an unsupervised contrastive representation learning approach to multi-modality deformable registration. By projecting learned multi-scale local patch features onto a jointly learned inter-domain embedding space, ContraReg obtains representations useful for non-rigid multi-modality alignment. Experimentally, ContraReg achieves accurate and robust results with smooth and invertible deformations across a series of baselines and ablations on a neonatal T1-T2 brain MRI registration task, with all methods validated over a wide range of deformation regularization strengths.
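The abstract describes projecting multi-scale local patch features onto a jointly learned inter-domain embedding space and training it contrastively. As a rough illustration of that idea, below is a minimal PyTorch sketch of a PatchNCE-style InfoNCE loss over spatially corresponding patch features from two modalities. The function name `patch_infonce_loss`, the single shared projection head, and all shapes and hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def patch_infonce_loss(feat_a, feat_b, proj_head, tau=0.07, num_patches=256):
    """InfoNCE loss over spatially corresponding patch features (a sketch).

    feat_a, feat_b: (B, C, D, H, W) feature maps at one scale, assumed to be
    extracted at the same spatial locations in each modality (e.g. T1
    features and warped T2 features). proj_head maps the channel dim C to
    the shared embedding dim E.
    """
    B, C = feat_a.shape[:2]
    # Flatten spatial dimensions and sample a random subset of locations.
    fa = feat_a.reshape(B, C, -1)                      # (B, C, N)
    fb = feat_b.reshape(B, C, -1)
    P = min(num_patches, fa.shape[-1])
    idx = torch.randperm(fa.shape[-1], device=fa.device)[:P]
    fa = fa[:, :, idx].permute(0, 2, 1)                # (B, P, C)
    fb = fb[:, :, idx].permute(0, 2, 1)

    # Project both domains into the joint embedding space and L2-normalize,
    # so the dot products below are cosine similarities.
    za = F.normalize(proj_head(fa), dim=-1)            # (B, P, E)
    zb = F.normalize(proj_head(fb), dim=-1)

    # Logits over all sampled locations: each patch's positive is the
    # same-location patch in the other modality (the diagonal); all other
    # sampled locations serve as negatives.
    logits = torch.bmm(za, zb.transpose(1, 2)) / tau   # (B, P, P)
    targets = torch.arange(P, device=logits.device).expand(B, -1)
    return F.cross_entropy(logits.reshape(-1, P), targets.reshape(-1))

# Hypothetical usage with a small two-layer projection head:
proj = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(),
                           torch.nn.Linear(64, 64))
f_t1 = torch.randn(2, 32, 8, 8, 8)   # T1-derived features at one scale
f_t2 = torch.randn(2, 32, 8, 8, 8)   # warped T2-derived features, same locations
loss = patch_infonce_loss(f_t1, f_t2, proj)
loss.backward()
```

Treating same-location cross-modality pairs as positives and other sampled locations as negatives is what lets the shared embedding capture modality-invariant local semantics without any ground-truth correspondences, which is the property the abstract credits for enabling non-rigid multi-modality alignment.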
DOI: 10.48550/arxiv.2206.13434