Deep-learned placental vessel segmentation for intraoperative video enhancement in fetoscopic surgery

Bibliographic Details
Published in: International Journal of Computer Assisted Radiology and Surgery, 2019-02, Vol. 14 (2), pp. 227-235
Main authors: Sadda, Praneeth; Imamoglu, Metehan; Dombrowski, Michael; Papademetris, Xenophon; Bahtiyar, Mert O.; Onofrey, John
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract:
Introduction: Twin-to-twin transfusion syndrome (TTTS) is a potentially lethal condition that affects pregnancies in which twins share a single placenta. The definitive treatment for TTTS is fetoscopic laser photocoagulation, a procedure in which placental blood vessels are selectively cauterized. Challenges in this procedure include difficulty in quickly identifying placental blood vessels due to the many artifacts in the endoscopic video that the surgeon uses for navigation. We propose using deep-learned segmentations of blood vessels to create masks that can be recombined with the original fetoscopic video frame in such a way that the location of placental blood vessels is discernible at a glance.
Methods: In a process approved by an institutional review board, intraoperative videos were acquired from ten fetoscopic laser photocoagulation surgeries performed at Yale New Haven Hospital. A total of 345 video frames were selected from these videos at regularly spaced time intervals. The video frames were segmented once by an expert human rater (a clinician) and once by a novice but trained human rater (an undergraduate student). The segmentations were used to train a fully convolutional neural network of 25 layers.
Results: The neural network was able to produce segmentations with a high similarity to ground-truth segmentations produced by an expert human rater (sensitivity = 92.15% ± 10.69%) and produced segmentations that were significantly more accurate than those produced by a novice human rater (sensitivity = 56.87% ± 21.64%; p
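The abstract states that the network's vessel segmentations are recombined with the original fetoscopic video frames so that vessel locations are discernible at a glance, but it does not describe the recombination step itself. The sketch below illustrates one plausible recombination, an alpha-blended color overlay built with OpenCV; the function name overlay_vessel_mask, the probability threshold, and the overlay color are illustrative assumptions rather than the authors' implementation.

```python
import cv2
import numpy as np


def overlay_vessel_mask(frame_bgr: np.ndarray,
                        vessel_prob: np.ndarray,
                        threshold: float = 0.5,
                        color=(0, 255, 0),
                        alpha: float = 0.4) -> np.ndarray:
    """Tint the pixels that the segmentation network labels as vessel.

    frame_bgr   : H x W x 3 uint8 fetoscopic frame (BGR, as read by OpenCV)
    vessel_prob : H x W float map in [0, 1] from the segmentation network
    Returns the frame with vessel pixels highlighted in the given color.
    """
    # Binarize the network output (threshold is an assumed value).
    mask = vessel_prob >= threshold

    # Build a solid-color image and blend it with the original frame.
    tint = np.zeros_like(frame_bgr)
    tint[:] = color
    blended = cv2.addWeighted(frame_bgr, 1.0 - alpha, tint, alpha, 0.0)

    # Replace only the vessel pixels, leaving the rest of the frame untouched.
    out = frame_bgr.copy()
    out[mask] = blended[mask]
    return out
```

In use, each video frame would first be passed through the trained fully convolutional network to obtain vessel_prob, and the overlaid frame would then be displayed to the surgeon in place of the raw endoscopic image.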
ISSN: 1861-6410, 1861-6429
DOI: 10.1007/s11548-018-1886-4