Determining the invasiveness of ground-glass nodules using a 3D multi-task network

Bibliographic Details
Published in: European Radiology, September 2021, Vol. 31 (9), pp. 7162-7171
Authors: Yu, Ye; Wang, Na; Huang, Ning; Liu, Xinglong; Zheng, Yuanjie; Fu, Yicheng; Li, Xiaoqian; Wu, Huawei; Xu, Jianrong; Cheng, Jiejun
Format: Article
Language: English
Online access: Full text
Abstract:
Objectives: The aim of this study was to determine the invasiveness of ground-glass nodules (GGNs) using a 3D multi-task deep learning network.

Methods: We propose a novel architecture based on 3D multi-task learning to determine the invasiveness of GGNs. In total, 770 patients with 909 GGNs who underwent lung CT scans were enrolled. The patients were divided into a training set (n = 626) and a test set (n = 144). In the test set, invasiveness was classified by deep learning into three categories: atypical adenomatous hyperplasia (AAH) and adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), and invasive pulmonary adenocarcinoma (IA). In addition, binary classifications (AAH/AIS/MIA vs. IA) were made by two thoracic radiologists and compared with the deep learning results.

Results: In the three-category classification task, the sensitivity, specificity, and accuracy were 65.41%, 82.21%, and 64.9%, respectively. In the binary classification task, the sensitivity, specificity, accuracy, and area under the ROC curve (AUC) were 69.57%, 95.24%, 87.42%, and 0.89, respectively. In the radiologists' visual assessment of GGN invasiveness (binary classification), the sensitivity, specificity, and accuracy were 58.93%, 90.51%, and 81.35% for the senior radiologist and 76.79%, 55.47%, and 61.66% for the junior radiologist, respectively.

Conclusions: The proposed multi-task deep learning model achieved good classification results in determining the invasiveness of GGNs. It may help to identify patients with invasive lesions who need surgery and to select appropriate surgical methods.

Key Points:
• The proposed multi-task model achieved good classification results for the invasiveness of GGNs.
• The proposed network includes a classification branch and a segmentation branch, which learn global and regional features, respectively.
• The multi-task model could assist doctors in selecting patients with invasive lesions who need surgery and in choosing appropriate surgical methods.
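The abstract describes a shared network with a classification branch for the invasiveness label and a segmentation branch for the nodule mask, but gives no implementation details. The following is a minimal sketch of what such a shared-encoder 3D multi-task architecture could look like in PyTorch; the patch size, channel counts, layer names, and the loss weighting are all illustrative assumptions, not the authors' design.

```python
# Minimal sketch (not the authors' code): a shared 3D encoder feeding a
# classification head (global invasiveness label) and a segmentation
# decoder (regional nodule mask). Patch size (64^3), channel counts, and
# the 0.5 loss weight are assumptions for illustration.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3x3 convolutions with batch norm and ReLU.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )


class MultiTaskGGN(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        # Shared encoder on a cropped GGN patch, e.g. 1 x 64 x 64 x 64.
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.enc3 = conv_block(32, 64)
        self.pool = nn.MaxPool3d(2)

        # Classification branch: global average pooling + linear layer.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(64, num_classes)
        )

        # Segmentation branch: upsample back to input resolution,
        # predicting per-voxel nodule logits.
        self.up2 = nn.ConvTranspose3d(64, 32, kernel_size=2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.seg_head = nn.Conv3d(16, 1, kernel_size=1)

    def forward(self, x):
        f1 = self.enc1(x)              # 16 channels, full resolution
        f2 = self.enc2(self.pool(f1))  # 32 channels, 1/2 resolution
        f3 = self.enc3(self.pool(f2))  # 64 channels, 1/4 resolution

        logits = self.cls_head(f3)     # invasiveness logits

        d2 = self.dec2(torch.cat([self.up2(f3), f2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), f1], dim=1))
        mask_logits = self.seg_head(d1)
        return logits, mask_logits


if __name__ == "__main__":
    model = MultiTaskGGN(num_classes=3)
    patch = torch.randn(2, 1, 64, 64, 64)            # batch of CT patches
    target_cls = torch.tensor([0, 2])                # e.g. AAH/AIS=0, MIA=1, IA=2
    target_mask = torch.randint(0, 2, (2, 1, 64, 64, 64)).float()

    cls_logits, mask_logits = model(patch)
    # Joint objective: cross-entropy for invasiveness plus a weighted
    # per-voxel loss for the mask (weight chosen arbitrarily here).
    loss = nn.CrossEntropyLoss()(cls_logits, target_cls) \
        + 0.5 * nn.BCEWithLogitsLoss()(mask_logits, target_mask)
    loss.backward()
    print(cls_logits.shape, mask_logits.shape, float(loss))
```

The design intent reflected here is the one stated in the Key Points: the segmentation branch forces the shared encoder to attend to regional nodule features, while the classification branch uses the pooled global representation, so the two tasks regularize each other during joint training.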
ISSN: 0938-7994, 1432-1084
DOI: 10.1007/s00330-021-07794-0