Occlusion‐robust markerless surgical instrument pose estimation
Published in: Healthcare technology letters 2024-12, Vol. 11 (6), pp. 327-335
Format: Article
Language: English
Online access: Full text
Abstract: The estimation of the pose of surgical instruments is important in Robot-assisted Minimally Invasive Surgery (RMIS) to assist surgical navigation and enable autonomous robotic task execution. The performance of current instrument pose estimation methods deteriorates significantly in the presence of partial tool visibility, occlusions, and changes in the surgical scene. In this work, a vision-based framework is proposed for markerless estimation of the 6DoF pose of surgical instruments. To deal with partial instrument visibility, a keypoint object representation is used, and stable and accurate instrument poses are computed with a PnP solver. To boost the learning process of the model under occlusion, a new mask-based data augmentation approach is proposed. To validate the model, a dataset for instrument pose estimation with highly accurate ground-truth data has been generated using different surgical robotic instruments. The proposed network achieves submillimeter accuracy, and the experimental results verify its generalisability to different shapes of occlusion.
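The keypoint-plus-PnP idea in the abstract can be sketched as follows: detected 2D keypoints are matched to known 3D keypoints on the instrument model, and the 6DoF pose is recovered by minimising reprojection error. The sketch below is a minimal Gauss-Newton PnP refinement in NumPy under illustrative assumptions; the keypoint coordinates, camera intrinsics, and initial pose are hypothetical, and the paper's actual solver and keypoint detector are not reproduced here.

```python
import numpy as np

def rodrigues(rvec):
    # Axis-angle vector -> 3x3 rotation matrix (Rodrigues' formula).
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project(pose, pts3d, K):
    # Project 3D keypoints with pose = (rvec, t) into pixel coordinates.
    R = rodrigues(pose[:3])
    cam = pts3d @ R.T + pose[3:]
    uv = cam @ K.T
    return (uv[:, :2] / uv[:, 2:3]).ravel()

def solve_pnp_gn(pts3d, pts2d, K, init, iters=50):
    # Gauss-Newton refinement of a 6DoF pose from 2D-3D correspondences,
    # using a forward-difference Jacobian. A sketch, not the paper's solver.
    pose = np.asarray(init, dtype=float).copy()
    obs = np.asarray(pts2d, dtype=float).ravel()
    for _ in range(iters):
        base = project(pose, pts3d, K)
        res = base - obs
        J = np.zeros((res.size, 6))
        eps = 1e-6
        for j in range(6):
            p = pose.copy()
            p[j] += eps
            J[:, j] = (project(p, pts3d, K) - base) / eps
        step, *_ = np.linalg.lstsq(J, -res, rcond=None)
        pose += step
        if np.linalg.norm(step) < 1e-12:
            break
    return pose

# Hypothetical calibrated camera and instrument keypoints (tool frame, metres).
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
pts3d = np.array([[0.0, 0.0, 0.0], [0.01, 0.0, 0.0], [0.0, 0.01, 0.0],
                  [0.0, 0.0, 0.05], [0.01, 0.0, 0.05], [0.0, 0.01, 0.05]])
true = np.array([0.1, -0.2, 0.05, 0.02, -0.01, 0.30])  # rvec + t
pts2d = project(true, pts3d, K).reshape(-1, 2)

# Simulate partial visibility: only 4 of the 6 keypoints are detected.
vis = [0, 1, 3, 5]
est = solve_pnp_gn(pts3d[vis], pts2d[vis], K, init=true + 0.05)
```

Because the residuals are perspective reprojection errors, a handful of visible keypoints with known model coordinates is enough to constrain all six pose parameters, which is why a keypoint representation copes with partial instrument visibility.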
This research addresses the challenge of estimating the pose of surgical instruments in Robot-assisted Minimally Invasive Surgery (RMIS), which is crucial for surgical navigation and autonomous task execution. Existing methods struggle to maintain accuracy when visibility is limited by occlusions or scene changes. To improve performance, the study introduces a vision-based framework for markerless 6DoF pose estimation of surgical instruments. The framework uses a keypoint-based object representation and a PnP solver to compute poses accurately even with partial visibility, and a novel mask-based data augmentation technique is proposed to enhance model learning under occlusion. The approach achieves submillimeter accuracy and demonstrates robust generalisability across different occlusion scenarios.
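The mask-based occlusion augmentation could be sketched as follows, assuming simple rectangular masks blacked out over the training image; the paper's actual mask shapes, placement strategy, and training pipeline are not specified here, so everything below is an illustrative stand-in.

```python
import numpy as np

def random_mask_augment(image, rng, max_frac=0.4):
    # Black out a random rectangle to simulate an occluder over the
    # instrument. Hypothetical sketch: mask size is drawn between 1/8 of
    # the image side and max_frac of it, position is uniform.
    h, w = image.shape[:2]
    mh = int(rng.integers(h // 8, int(h * max_frac) + 1))
    mw = int(rng.integers(w // 8, int(w * max_frac) + 1))
    y = int(rng.integers(0, h - mh + 1))
    x = int(rng.integers(0, w - mw + 1))
    out = image.copy()
    out[y:y + mh, x:x + mw] = 0
    return out

rng = np.random.default_rng(0)
img = np.full((480, 640, 3), 255, dtype=np.uint8)  # dummy white frame
aug = random_mask_augment(img, rng)
```

Applying such masks during training forces the keypoint network to rely on the visible parts of the instrument, which is the intuition behind training-time occlusion augmentation.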
ISSN: 2053-3713
DOI: 10.1049/htl2.12100