Automatic prostate segmentation using deep learning on clinically diverse 3D transrectal ultrasound images
Published in: Medical physics (Lancaster), 2020-06, Vol. 47 (6), p. 2413-2426
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Full text
Abstract:
Purpose
Needle‐based procedures for diagnosing and treating prostate cancer, such as biopsy and brachytherapy, have incorporated three‐dimensional (3D) transrectal ultrasound (TRUS) imaging to improve needle guidance. Using these images effectively typically requires the physician to manually segment the prostate to define the margins used for accurate registration, targeting, and other guidance techniques. However, manual prostate segmentation is a time‐consuming and difficult intraoperative process, often performed while the patient is under sedation (biopsy) or anesthetic (brachytherapy). An automatic 3D TRUS prostate segmentation method could provide physicians with a quick and accurate segmentation, minimizing procedure time and supporting an efficient workflow with improved patient throughput and faster patient access to care. The purpose of this study was to develop a supervised deep learning‐based method to segment the prostate in 3D TRUS images from different facilities, generated using multiple acquisition methods and commercial ultrasound machine models, in order to create a generalizable algorithm for needle‐based prostate cancer procedures.
Methods
Our proposed method for 3D segmentation involved prediction on two‐dimensional (2D) slices sampled radially around the approximate central axis of the prostate, followed by reconstruction into a 3D surface. A 2D U‐Net was modified, trained, and validated using 84 end‐fire and 122 side‐fire 3D TRUS images acquired during clinical biopsy and brachytherapy procedures. Modifications to the expansion section of the standard U‐Net included the addition of 50% dropouts and the use of transpose convolutions instead of standard upsampling followed by convolution, to reduce overfitting and improve performance, respectively. Manual contours provided the annotations needed for the training, validation, and testing datasets, with the testing dataset consisting of 20 end‐fire and 20 side‐fire unseen 3D TRUS images. Since predicting with 2D images has the potential to lose spatial and structural information, comparisons to 3D reconstruction and optimized 3D networks, including 3D V‐Net, Dense V‐Net, and high‐resolution 3D‐Net, were performed following an investigation into different loss functions. An extended selection of absolute and signed error metrics was computed, including pixel map comparisons [Dice similarity coefficient (DSC), recall, and precision], volume percent differences (VPD), mean surface distances (MSD), …
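The decoder modification described in the Methods can be made concrete with a short sketch. The following is a minimal PyTorch example of one expansion-path step, assuming a standard 2D U-Net layout; the module name UpBlock, the channel counts, and the exact layer arrangement are illustrative assumptions, since the abstract only specifies that transpose convolutions replace upsampling-plus-convolution and that 50% dropout is added.

```python
# Minimal sketch (not the authors' code) of a modified U-Net expansion block:
# a transpose convolution replaces upsample+conv, and 50% dropout is added.
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    """One decoder step: transpose-conv upsampling, skip concatenation, 50% dropout, two convs."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Learned upsampling: doubles spatial resolution, halves channel count.
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.drop = nn.Dropout2d(p=0.5)  # 50% dropout to reduce overfitting
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch * 2, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        x = self.up(x)                    # upsample with learned weights
        x = torch.cat([x, skip], dim=1)   # concatenate the encoder skip connection
        x = self.drop(x)
        return self.conv(x)

# Example shapes: a 256-channel decoder feature map and its 128-channel encoder skip.
block = UpBlock(in_ch=256, out_ch=128)
x = torch.randn(1, 256, 32, 32)
skip = torch.randn(1, 128, 64, 64)
print(block(x, skip).shape)  # torch.Size([1, 128, 64, 64])
```

A full decoder would stack several such blocks with decreasing channel counts, mirroring the contraction path of the encoder.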
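The pixel-map and volume metrics named in the abstract can likewise be sketched for binary segmentation masks. The NumPy definitions below are illustrative, not the paper's exact formulas; in particular, whether VPD is reported signed or absolute, and how surface-based metrics such as MSD are computed, are assumptions based on common conventions.

```python
# Illustrative metric definitions (not from the paper) for boolean segmentation masks.
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """DSC, recall, precision, and signed volume percent difference (assumed definitions)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # true positive voxels
    fp = np.logical_and(pred, ~truth).sum()   # false positives
    fn = np.logical_and(~pred, truth).sum()   # false negatives
    dsc = 2 * tp / (2 * tp + fp + fn)         # Dice similarity coefficient
    recall = tp / (tp + fn)                   # sensitivity
    precision = tp / (tp + fp)
    # Signed volume percent difference relative to the manual (ground-truth) volume.
    vpd = 100.0 * (pred.sum() - truth.sum()) / truth.sum()
    return {"DSC": dsc, "recall": recall, "precision": precision, "VPD": vpd}
```

These voxel-count metrics require only the predicted and manual masks resampled onto the same grid; surface metrics such as MSD additionally require extracting the boundary of each mask.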
ISSN: 0094-2405; 2473-4209
DOI: 10.1002/mp.14134