Coarse-to-Fine Lung Nodule Segmentation in CT Images with Image Enhancement and Dual-branch Network

Bibliographic Details
Published in: IEEE Access, 2021-01, Vol. 9, p. 1-1
Authors: Wu, Zhitong; Zhou, Qianjun; Wang, Feng
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Lung nodule segmentation in CT images plays an important role in the clinical diagnosis and treatment of lung cancer. Among the different types of nodules, solitary nodules usually have clear boundaries and are relatively easy to segment, while the segmentation of non-solitary nodules with ambiguous boundaries remains challenging for both humans and computers. In this paper, we propose a coarse-to-fine lung nodule segmentation method that combines image enhancement with a dual-branch neural network. First, we preprocess the image to enhance the discriminability of the nodules and roughly locate the lesion area, so that background noise is eliminated and learning can focus on the features around the boundaries. Second, we propose a dual-branch network based on U-Net (DB U-Net) that effectively exploits information from both 2D slices and the relationships between neighboring slices for more precise and consistent segmentation. In addition, we construct a dataset composed mainly of non-solitary nodules. The proposed image enhancement method improves the effectiveness of network learning, while the dual-branch network explores multi-view information. The Dice coefficients of nodule segmentation on the LIDC dataset and our own dataset are 83.16% and 81.97%, respectively, significantly outperforming existing works.
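The abstract describes a dual-branch architecture in which one branch processes the target 2D slice and the other exploits relationships between neighboring slices. The following is only a minimal, hypothetical PyTorch sketch of that general idea, not the authors' DB U-Net: the class name, channel counts, number of context slices, and concatenation-based fusion are all illustrative assumptions.

```python
# Hypothetical sketch of a dual-branch U-Net-style segmenter (not the authors' DB U-Net).
# One branch sees the target 2D slice; the other sees a small stack of neighboring slices
# (as input channels) to capture inter-slice context. Features are fused at the bottleneck
# and decoded with skip connections from the 2D branch.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with BatchNorm and ReLU, as in standard U-Net stages.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class DualBranchUNet(nn.Module):
    def __init__(self, context_slices=3):
        super().__init__()
        # 2D branch encoder: target slice only.
        self.enc2d_1 = conv_block(1, 32)
        self.enc2d_2 = conv_block(32, 64)
        # Context branch encoder: neighboring slices stacked as channels.
        self.encctx_1 = conv_block(context_slices, 32)
        self.encctx_2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        # Bottleneck fuses the two branches by channel concatenation.
        self.bottleneck = conv_block(128, 128)
        # Shared decoder with skip connections from the 2D branch.
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(64 + 64, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(32 + 32, 32)
        self.head = nn.Conv2d(32, 1, 1)  # binary nodule mask logits

    def forward(self, slice_2d, slice_stack):
        # slice_2d: (B, 1, H, W); slice_stack: (B, context_slices, H, W)
        a1 = self.enc2d_1(slice_2d)
        a2 = self.enc2d_2(self.pool(a1))
        b1 = self.encctx_1(slice_stack)
        b2 = self.encctx_2(self.pool(b1))
        f = self.bottleneck(self.pool(torch.cat([a2, b2], dim=1)))
        d2 = self.dec2(torch.cat([self.up2(f), a2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), a1], dim=1))
        return self.head(d1)


if __name__ == "__main__":
    model = DualBranchUNet(context_slices=3)
    roi = torch.randn(2, 1, 64, 64)    # cropped ROI around the coarsely located nodule
    ctx = torch.randn(2, 3, 64, 64)    # the ROI plus its neighboring slices
    print(model(roi, ctx).shape)       # -> torch.Size([2, 1, 64, 64])
```

In this sketch the coarse-to-fine aspect is reflected only in the inputs: the network is assumed to operate on an enhanced, cropped region of interest produced by the preprocessing step described in the abstract.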
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3049379