Dynamic 2-D/3-D Rigid Registration Framework Using Point-To-Plane Correspondence Model
Published in: IEEE Transactions on Medical Imaging, September 2017, Vol. 36, No. 9, pp. 1939-1954
Main authors:
Format: Article
Language: English
Abstract: In image-guided interventional procedures, live 2-D X-ray images can be augmented with preoperative 3-D computed tomography or MRI images to provide planning landmarks and enhanced spatial perception. An accurate alignment between the 3-D and 2-D images is a prerequisite for fusion applications. This paper presents a dynamic rigid 2-D/3-D registration framework, which measures the local 3-D-to-2-D misalignment and efficiently constrains the update of both planar and non-planar 3-D rigid transformations using a novel point-to-plane correspondence model. In the simulation evaluation, the proposed method achieved a mean 3-D accuracy of 0.07 mm for the head phantom and 0.05 mm for the thorax phantom using single-view X-ray images. In the evaluation of dynamic motion compensation, the method significantly improved accuracy compared with the baseline method. The proposed method was also evaluated on a publicly available clinical angiogram data set with "gold-standard" registrations, achieving a mean 3-D accuracy below 0.8 mm and a mean 2-D accuracy below 0.3 mm using single-view X-ray images, and outperforming state-of-the-art methods in both accuracy and robustness in single-view registration. The proposed method is intuitive, generic, and suitable for both initial and dynamic registration scenarios.
ISSN: 0278-0062 (print), 1558-254X (electronic)
DOI: 10.1109/TMI.2017.2702100
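
Note on the method: the abstract's point-to-plane correspondence model constrains a rigid update from planar constraints on 3-D points. The sketch below shows only the generic linearized point-to-plane least-squares step familiar from rigid registration (ICP-style); it does not reproduce the paper's specific 2-D/3-D correspondence construction, and all function and variable names are illustrative assumptions.

```python
import numpy as np

def point_to_plane_step(points, plane_points, plane_normals):
    """One linearized rigid-registration update from point-to-plane
    correspondences (generic ICP-style step; illustrative only).

    Each correspondence asks that a 3-D point p, after a small rigid
    motion (rotation vector w, translation t), lie on the plane through
    q with unit normal n:  n . ((p + w x p + t) - q) = 0.
    Stacking one row per correspondence gives a 6-parameter linear
    least-squares problem in (w, t).
    """
    points = np.asarray(points, dtype=float)
    plane_points = np.asarray(plane_points, dtype=float)
    plane_normals = np.asarray(plane_normals, dtype=float)

    A = np.zeros((len(points), 6))
    b = np.zeros(len(points))
    for i, (p, q, n) in enumerate(zip(points, plane_points, plane_normals)):
        A[i, :3] = np.cross(p, n)  # rotation part: n.(w x p) = (p x n).w
        A[i, 3:] = n               # translation part: n.t
        b[i] = n @ (q - p)         # signed point-to-plane residual
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]            # small rotation vector w, translation t

# Toy check: recover a pure 1 mm translation along z.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
normals = rng.normal(size=(50, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
w, t = point_to_plane_step(pts, pts + [0.0, 0.0, 1.0], normals)
# Expect w ~ [0, 0, 0] and t ~ [0, 0, 1].
```

In practice such a step is iterated, with the small rotation vector w converted to a rotation matrix (e.g., via the Rodrigues formula) between iterations; in the 2-D/3-D setting described in the abstract, the constraint planes would be derived from the X-ray projection geometry rather than from surface normals.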