A markerless automatic deformable registration framework for augmented reality navigation of laparoscopic partial nephrectomy

Bibliographic details
Published in: International Journal for Computer Assisted Radiology and Surgery, 2019-08, Vol. 14 (8), pp. 1285-1294
Authors: Zhang, Xiaohui; Wang, Junchen; Wang, Tianmiao; Ji, Xuquan; Shen, Yu; Sun, Zhen; Zhang, Xuebin
Format: Article
Language: English
Description
Abstract:

Purpose: Video see-through augmented reality (VST-AR) navigation for laparoscopic partial nephrectomy (LPN) can enhance surgeons' intraoperative perception by visualizing surgical targets and critical structures of the kidney tissue. Image registration is the main challenge in the procedure. Existing registration methods in laparoscopic navigation systems suffer from limitations such as manual alignment, invasive external marker fixation, reliance on external tracking devices with bulky tracking sensors, and lack of deformation compensation. To address these issues, we present a markerless automatic deformable registration framework for LPN VST-AR navigation.

Method: Dense stereo matching and 3D reconstruction, automatic segmentation, and surface stitching are combined to obtain a larger, dense intraoperative point cloud of the renal surface. A coarse-to-fine deformable registration is then performed to achieve precise, automatic alignment between the intraoperative point cloud and the preoperative model, using the iterative closest point algorithm followed by the coherent point drift algorithm. Kidney phantom experiments and in vivo experiments were performed to evaluate the accuracy and effectiveness of the approach.

Results: The average accuracy of the automatic segmentation was 94.9%. The mean target registration error in the phantom experiments was 1.28 ± 0.68 mm (root mean square error). In vivo experiments showed that the tumor location was identified successfully by superimposing the tumor model on the laparoscopic view.

Conclusion: The experimental results demonstrate that the proposed framework can accurately and automatically overlay comprehensive preoperative models on deformable soft organs in a VST-AR manner, without extra intraoperative imaging modalities or external tracking devices, and indicate its potential for clinical use.
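The coarse-to-fine registration described in the Method paragraph (rigid iterative closest point alignment followed by non-rigid coherent point drift refinement) can be illustrated with a minimal sketch. The sketch below assumes Open3D for the ICP stage and pycpd for the CPD stage; the library choices, the correspondence distance, and the alpha/beta smoothness parameters are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a coarse-to-fine point cloud registration: rigid ICP (coarse)
# followed by coherent point drift (fine). Libraries and parameter values
# are assumptions for illustration only.
import numpy as np
import open3d as o3d
from pycpd import DeformableRegistration


def coarse_to_fine_register(intraop_pts: np.ndarray, preop_pts: np.ndarray) -> np.ndarray:
    """Align preoperative model points to the intraoperative point cloud.

    intraop_pts : (N, 3) reconstructed renal surface from stereo laparoscopy
    preop_pts   : (M, 3) surface points of the preoperative model
    Returns the deformed preoperative points in the intraoperative frame.
    """
    # --- Coarse stage: rigid alignment with iterative closest point ---
    source = o3d.geometry.PointCloud()
    source.points = o3d.utility.Vector3dVector(preop_pts)
    target = o3d.geometry.PointCloud()
    target.points = o3d.utility.Vector3dVector(intraop_pts)

    icp = o3d.pipelines.registration.registration_icp(
        source, target,
        max_correspondence_distance=5.0,  # mm, assumed search radius
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    source.transform(icp.transformation)  # apply the rigid 4x4 transform in place

    # --- Fine stage: non-rigid refinement with coherent point drift ---
    cpd = DeformableRegistration(
        X=intraop_pts,                    # fixed intraoperative cloud
        Y=np.asarray(source.points),      # rigidly pre-aligned preoperative model
        alpha=2.0, beta=2.0,              # assumed deformation smoothness parameters
    )
    deformed_preop_pts, _ = cpd.register()
    return deformed_preop_pts
```

In this two-stage design, the rigid ICP pass provides a good initial pose so that the probabilistic CPD refinement only has to recover the residual soft-tissue deformation, which is how the abstract motivates combining the two algorithms.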
ISSN: 1861-6410, 1861-6429
DOI: 10.1007/s11548-019-01974-6