Ultrasound prostate segmentation based on multidirectional deeply supervised V‐Net

Bibliographic Details
Published in: Medical Physics (Lancaster), 2019-07, Vol. 46 (7), pp. 3194-3206
Main authors: Lei, Yang; Tian, Sibo; He, Xiuxiu; Wang, Tonghe; Wang, Bo; Patel, Pretesh; Jani, Ashesh B.; Mao, Hui; Curran, Walter J.; Liu, Tian; Yang, Xiaofeng
Format: Article
Language: English
Online access: Full text
Description
Abstract:

Purpose: Transrectal ultrasound (TRUS) is a versatile, real-time imaging modality that is commonly used in image-guided prostate cancer interventions (e.g., biopsy and brachytherapy). Accurate segmentation of the prostate is key to biopsy needle placement, brachytherapy treatment planning, and motion management. Manual segmentation during these interventions is time-consuming and subject to inter- and intraobserver variation. To address these drawbacks, we aimed to develop a deep learning-based method that integrates deep supervision into a three-dimensional (3D) patch-based V-Net for prostate segmentation.

Methods and materials: We developed a multidirectional deep-learning-based method to automatically segment the prostate for ultrasound-guided radiation therapy. A 3D supervision mechanism is integrated into the V-Net stages to address the optimization difficulties of training a deep network with limited training data. We combine a binary cross-entropy (BCE) loss and a batch-based Dice loss into a stage-wise hybrid loss function for deep supervision training. During the segmentation stage, patches are extracted from the newly acquired ultrasound image and fed to the trained network, which adaptively labels the prostate tissue. The final segmented prostate volume is reconstructed by patch fusion and further refined through a contour refinement process.

Results: TRUS images from forty-four patients were used to test our segmentation method. Our results were compared with manually segmented contours (ground truth). The mean prostate volume Dice similarity coefficient (DSC) was 0.92 ± 0.03; the Hausdorff distance (HD), mean surface distance (MSD), and residual mean surface distance (RMSD) were 3.94 ± 1.55 mm, 0.60 ± 0.23 mm, and 0.90 ± 0.38 mm, respectively.

Conclusion: We developed a novel, deeply supervised deep-learning approach with reliable contour refinement to automatically segment the prostate in TRUS images, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for diagnostic and therapeutic applications in prostate cancer.
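The stage-wise hybrid loss described in the abstract (BCE combined with a batch-based Dice loss, applied to the deeply supervised intermediate stage outputs as well as the final output) can be sketched roughly as below. This is a minimal PyTorch sketch under stated assumptions, not the paper's implementation: the equal BCE/Dice weighting, the auxiliary weight `aux_weight`, and the assumption that auxiliary stage outputs are already upsampled to label resolution are all illustrative choices. "Batch-based" is taken here to mean the Dice term is computed over the whole batch at once rather than per sample.

```python
import torch
import torch.nn.functional as F

def batch_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss computed jointly over the whole batch ("batch-based" Dice)."""
    pred = pred.reshape(-1)      # flatten predictions across all samples
    target = target.reshape(-1)  # flatten float 0/1 ground-truth labels
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def hybrid_loss(logits, target):
    """BCE + batch-based Dice on one (logits, labels) pair; equal weighting is an assumption."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    dice = batch_dice_loss(torch.sigmoid(logits), target)
    return bce + dice

def deeply_supervised_loss(main_logits, aux_logits_list, target, aux_weight=0.5):
    """Sum the hybrid loss over the main output and each auxiliary stage output.
    Auxiliary outputs are assumed to be upsampled to the label resolution."""
    loss = hybrid_loss(main_logits, target)
    for aux in aux_logits_list:
        loss = loss + aux_weight * hybrid_loss(aux, target)
    return loss
```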
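Likewise, the patch extraction and patch fusion used at segmentation time can be approximated by standard sliding-window inference with overlap averaging. The patch size, stride, and model interface below are assumptions, and the paper's own fusion and contour refinement steps may differ; a minimal sketch:

```python
import numpy as np
import torch

def _starts(dim, patch, stride):
    # Start indices covering [0, dim), including a final edge-aligned patch.
    s = list(range(0, dim - patch + 1, stride))
    if s[-1] != dim - patch:
        s.append(dim - patch)
    return s

def segment_volume(model, volume, patch=(64, 64, 64), stride=(32, 32, 32)):
    """Run the network on overlapping 3D patches of `volume` (D, H, W) and
    average the sigmoid outputs into one probability map. Assumes the volume
    is at least as large as the patch in every dimension."""
    D, H, W = volume.shape
    prob = np.zeros((D, H, W), dtype=np.float32)
    count = np.zeros((D, H, W), dtype=np.float32)
    model.eval()
    with torch.no_grad():
        for z in _starts(D, patch[0], stride[0]):
            for y in _starts(H, patch[1], stride[1]):
                for x in _starts(W, patch[2], stride[2]):
                    sub = volume[z:z+patch[0], y:y+patch[1], x:x+patch[2]]
                    t = torch.from_numpy(sub[None, None].copy()).float()  # (1,1,D,H,W)
                    out = torch.sigmoid(model(t))[0, 0].cpu().numpy()
                    prob[z:z+patch[0], y:y+patch[1], x:x+patch[2]] += out
                    count[z:z+patch[0], y:y+patch[1], x:x+patch[2]] += 1.0
    return prob / count  # averaged probabilities; threshold (e.g., at 0.5) for a mask
```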
ISSN: 0094-2405
eISSN: 2473-4209
DOI: 10.1002/mp.13577