Deep convolutional neural network for segmentation of thoracic organs‐at‐risk using cropped 3D images

Bibliographic Details
Published in: Medical Physics (Lancaster), 2019-05, Vol. 46 (5), pp. 2169-2180
Main authors: Feng, Xue; Qing, Kun; Tustison, Nicholas J.; Meyer, Craig H.; Chen, Quan
Format: Article
Language: English
Online access: Full text

Abstract

Purpose: Automatic segmentation of organs-at-risk (OARs) is a key step in radiation treatment planning to reduce human effort and bias. Deep convolutional neural networks (DCNNs) have shown great success in many medical image segmentation applications, but challenges remain in dealing with large 3D images for optimal results. The purpose of this study was to develop a novel DCNN method for thoracic OAR segmentation using cropped 3D images.

Methods: To segment the five organs (left and right lungs, heart, esophagus, and spinal cord) from thoracic CT scans, preprocessing was first performed to unify the voxel spacing and intensity. A 3D U-Net was then trained on the resampled thoracic images to localize each organ; the original images were cropped so that each crop contained only one organ and served as the input to that organ's segmentation network. The per-organ segmentation maps were finally merged to obtain the overall result. The network structures, as well as the training and testing strategies, were optimized for each step, and a novel testing augmentation with multiple iterations of image cropping was used. The networks were trained on 36 thoracic CT scans with expert annotations provided by the organizers of the 2017 AAPM Thoracic Auto-segmentation Challenge and tested on the challenge testing dataset as well as on a private dataset.

Results: The proposed method earned second place in the live phase of the challenge and first place in the subsequent ongoing phase using the newly developed testing augmentation approach. On average, it showed better-than-human performance in terms of Dice scores (spinal cord: 0.893 ± 0.044; right lung: 0.972 ± 0.021; left lung: 0.979 ± 0.008; heart: 0.925 ± 0.015; esophagus: 0.726 ± 0.094), mean surface distance (spinal cord: 0.662 ± 0.248 mm; right lung: 0.933 ± 0.574 mm; left lung: 0.586 ± 0.285 mm; heart: 2.297 ± 0.492 mm; esophagus: 2.341 ± 2.380 mm), and 95% Hausdorff distance (spinal cord: 1.893 ± 0.627 mm; right lung: 3.958 ± 2.845 mm; left lung: 2.103 ± 0.938 mm; heart: 6.570 ± 1.501 mm; esophagus: 8.714 ± 10.588 mm). It also performed well on the private dataset and reduced the manual editing time to 7.5 min per patient following automatic segmentation.

Conclusions: The proposed DCNN method demonstrated good performance in automatic OAR segmentation from thoracic CT scans. With improved accuracy and reduced cost for OAR segmentation, it has the potential to support the eventual clinical adoption of deep learning in radiation treatment planning.
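
The Methods section outlines a two-stage, crop-then-segment pipeline: intensity and spacing normalization, coarse whole-volume localization with a 3D U-Net, per-organ cropping, organ-specific segmentation, and merging. The Python sketch below illustrates how such a pipeline could be wired together under stated assumptions; it is not the authors' implementation. The networks are passed in as opaque callables, and every concrete value (the HU window, the 0.25 downsampling factor, the crop margin, the iteration count, the 0.5 threshold, and the label ordering) is a hypothetical placeholder rather than a parameter from the paper.

```python
import numpy as np
from scipy import ndimage


def normalize(ct_hu, lo=-1000.0, hi=600.0):
    """Clip Hounsfield units to a window and rescale to [0, 1].

    The window bounds are illustrative, not the paper's values.
    """
    ct = np.clip(ct_hu.astype(np.float32), lo, hi)
    return (ct - lo) / (hi - lo)


def bbox_slices(mask, margin, shape):
    """Bounding box of a binary mask, expanded by `margin` voxels per axis."""
    idx = np.argwhere(mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, shape)
    return tuple(slice(a, b) for a, b in zip(lo, hi))


def segment_thoracic_oars(ct_hu, coarse_net, organ_nets, margin=8, n_crop_iters=2):
    """Crop-then-segment sketch.

    coarse_net:  callable, downsampled volume -> integer label map
                 (0 = background, 1..5 = one label per organ; ordering assumed).
    organ_nets:  dict mapping label -> callable, cropped volume -> foreground
                 probability map of the same shape as the crop.
    """
    vol = normalize(ct_hu)

    # Stage 1: coarse localization on a downsampled copy of the whole scan.
    small = ndimage.zoom(vol, 0.25, order=1)
    coarse = coarse_net(small)
    labels = ndimage.zoom(coarse, [v / s for v, s in zip(vol.shape, small.shape)],
                          order=0)  # nearest-neighbor keeps integer labels

    final = np.zeros(vol.shape, dtype=np.uint8)
    for label, organ_net in organ_nets.items():
        mask = labels == label
        # Stage 2: crop around the organ and segment at full resolution.
        # Re-deriving the crop box from the latest prediction and repeating is
        # one plausible reading of "multiple iterations of image cropping".
        for _ in range(n_crop_iters):
            if not mask.any():
                break
            roi = bbox_slices(mask, margin, vol.shape)
            prob = organ_net(vol[roi])
            mask = np.zeros(vol.shape, dtype=bool)
            mask[roi] = prob > 0.5
        final[mask] = label  # merge per-organ masks into one label map
    return final
```

Running each organ-specific network on a tight crop rather than on the whole scan keeps the 3D input small enough to process at full resolution, which is the motivation the abstract gives for cropping.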

ISSN: 0094-2405, 2473-4209
DOI: 10.1002/mp.13466