Image registration is a geometric deep learning task
Saved in:

| Main authors | , , , , , |
|---|---|
| Format | Article |
| Language | eng |
| Subjects | |
| Online access | Order full text |
Abstract: Data-driven deformable image registration methods predominantly rely on
operations that process grid-like inputs. However, applying deformable
transformations to an image results in a warped space that deviates from a
rigid grid structure. Consequently, data-driven approaches with sequential
deformations have to apply grid resampling operations between each deformation
step. While artifacts caused by resampling are negligible in high-resolution
images, the resampling of sparse, high-dimensional feature grids introduces
errors that affect the deformation modeling process. Taking inspiration from
Lagrangian reference frames of deformation fields, our work introduces a novel
paradigm for data-driven deformable image registration that utilizes geometric
deep-learning principles to model deformations without grid requirements.
Specifically, we model image features as a set of nodes that freely move in
Euclidean space, update their coordinates under graph operations, and
dynamically readjust their local neighborhoods. We employ this formulation to
construct a multi-resolution deformable registration model, where deformation
layers iteratively refine the overall transformation at each resolution without
intermediate resampling operations on the feature grids. We investigate our
method's ability to capture large deformations across a number of medical
imaging registration tasks. In particular, we apply our approach (GeoReg) to
the registration of inter-subject brain MR images and inhale-exhale lung CT
images, showing on-par performance with current state-of-the-art methods. We
believe our contribution opens up avenues of research to reduce the black-box
nature of current learned registration paradigms by explicitly modeling the
transformation within the architecture.
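The abstract's core idea can be sketched in a few lines: features live on nodes that move freely in Euclidean space, and each node's neighborhood is rebuilt from the updated coordinates after every step, so no grid resampling is ever needed. The sketch below is a minimal toy illustration under assumed design choices (a k-NN graph and a centroid-based update rule), not the actual GeoReg architecture; all function names and parameters here are hypothetical.

```python
import numpy as np

def knn_neighbors(coords, k):
    """Indices of each node's k nearest neighbors (excluding itself)."""
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)  # a node is not its own neighbor
    return np.argsort(d2, axis=1)[:, :k]

def deformation_step(coords, k=4, step=0.1):
    """One toy deformation update: each node moves a fraction of the way
    toward the centroid of its current k nearest neighbors. Neighborhoods
    are recomputed from the updated coordinates on every call, so no fixed
    grid (and no intermediate resampling) is involved."""
    nbrs = knn_neighbors(coords, k)
    centroids = coords[nbrs].mean(axis=1)
    return coords + step * (centroids - coords)

rng = np.random.default_rng(0)
coords = rng.normal(size=(32, 2))  # node positions, free to move in R^2
feats = rng.normal(size=(32, 8))   # per-node features stay attached to their nodes
for _ in range(3):                 # iterative refinement with dynamic neighborhoods
    coords = deformation_step(coords)
```

Because features travel with their nodes rather than living on a grid, sequential deformation layers compose by simply updating coordinates, which is the property the abstract contrasts with resampling-based pipelines.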
DOI: 10.48550/arxiv.2412.13294