FPSN-FNCC: an accurate and fast motion tracking algorithm in 3D ultrasound for image-guided interventions


Bibliographic Details
Published in: Physics in Medicine & Biology, 2021-08, Vol. 66 (15), p. 155012, Article 155012
Authors: He, Jishuai; Shen, Chunxu; Chen, Yao; Huang, Yibin; Wu, Jian
Format: Article
Language: English
Abstract: Uncertain target motion caused by a patient's breathing, heartbeat and body drift can increase the target localization error during image-guided interventions, which may cause additional surgical trauma. A surgical navigation system with accurate motion tracking is therefore important for improving operative accuracy and reducing trauma. In this work, we propose an accurate and fast tracking algorithm for three-dimensional (3D) ultrasound (US) sequences to achieve moving-object tracking in US-guided surgery. The idea of the algorithm is as follows. First, a feature pyramid architecture is introduced into a Siamese network to extract multiscale convolutional features. Second, to improve the network's discriminative power and its robustness to ultrasonic noise and gain variation, we use normalized cross correlation (NCC) to calculate the similarity between the template block and the search block. Third, a fast NCC (FNCC) is proposed, which enables real-time tracking. Finally, a density peaks clustering approach is used to compensate for the motion of the target and further improve the tracking accuracy. The proposed algorithm is evaluated on a CLUST dataset that includes 22 sets of 3D US sequences, and a mean error of 1.60 ± 0.97 mm with respect to manual annotations is obtained. Comparison with other published works shows that our algorithm achieves comparable performance. An ablation study confirms that the results benefit from the feature pyramid architecture and FNCC. These findings indicate that our algorithm may improve motion tracking accuracy in image-guided interventions.
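
To illustrate the similarity measure the abstract refers to, the sketch below computes plain normalized cross correlation between a template block and a candidate block and runs a brute-force search over a 3D volume. It is a minimal assumption-based baseline in Python/NumPy, not the authors' FNCC or their Siamese-network pipeline; the function names ncc_similarity and track_exhaustive are hypothetical.

import numpy as np

def ncc_similarity(template, candidate):
    # Normalized cross correlation between a template block and a
    # same-sized candidate block (both 3D numpy arrays). Illustrative
    # only; this is the standard NCC formula, not the paper's FNCC.
    t = template - template.mean()
    c = candidate - candidate.mean()
    denom = np.sqrt((t ** 2).sum() * (c ** 2).sum())
    if denom == 0:
        return 0.0
    return float((t * c).sum() / denom)

def track_exhaustive(template, volume):
    # Slide the template over every position in the 3D search volume and
    # return the offset with the highest NCC score. This brute-force
    # search is the slow baseline that a fast NCC aims to accelerate.
    tz, ty, tx = template.shape
    vz, vy, vx = volume.shape
    best_score, best_pos = -1.0, (0, 0, 0)
    for z in range(vz - tz + 1):
        for y in range(vy - ty + 1):
            for x in range(vx - tx + 1):
                score = ncc_similarity(template, volume[z:z+tz, y:y+ty, x:x+tx])
                if score > best_score:
                    best_score, best_pos = score, (z, y, x)
    return best_pos, best_score

In practice, exhaustive search like this is far too slow for real-time 3D tracking; fast NCC variants typically rely on precomputed running sums or FFT-based correlation, which is presumably the kind of speed-up the paper's FNCC targets.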
ISSN: 0031-9155
EISSN: 1361-6560
DOI: 10.1088/1361-6560/abffef