A convolutional neural network for total tumor segmentation in [64Cu]Cu-DOTATATE PET/CT of patients with neuroendocrine neoplasms


Bibliographic Details
Published in: EJNMMI Research 2022-05, Vol. 12 (1), p. 30, Article 30
Authors: Carlsen, Esben Andreas; Lindholm, Kristian; Hindsholm, Amalie; Gæde, Mathias; Ladefoged, Claes Nøhr; Loft, Mathias; Johnbeck, Camilla Bardram; Langer, Seppo Wang; Oturai, Peter; Knigge, Ulrich; Kjaer, Andreas; Andersen, Flemming Littrup
Format: Article
Language: English
Description
Abstract:
Background: Segmentation of neuroendocrine neoplasms (NENs) in [64Cu]Cu-DOTATATE positron emission tomography makes it possible to extract quantitative measures usable for prognostication of patients. However, manual tumor segmentation is cumbersome and time-consuming. Therefore, we aimed to implement and test an artificial intelligence (AI) network for tumor segmentation. Patients with gastroenteropancreatic or lung NEN who had undergone [64Cu]Cu-DOTATATE PET/CT were included in our training (n = 117) and test (n = 41) cohorts. Further, 10 patients with no signs of NEN were included as negative controls. Ground truth segmentations were obtained by a physician using a standardized semiautomatic method for tumor segmentation. The nnU-Net framework was used to set up a deep learning U-Net architecture. Dice score, sensitivity and precision were used for selection of the final model. AI segmentations were implemented in a clinical imaging viewer, where a physician evaluated performance and performed manual adjustments.
Results: Cross-validation training was used to generate models and an ensemble model. The ensemble model performed best overall, with a lesion-wise Dice of 0.850 and pixel-wise Dice, precision and sensitivity of 0.801, 0.786 and 0.872, respectively. Performance of the ensemble model was acceptable with some degree of manual adjustment in 35/41 (85%) patients. Final tumor segmentation could be obtained from the AI model with manual adjustments in 5 min versus 17 min for the ground truth method, p …
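The pixel-wise Dice, precision and sensitivity quoted above are standard overlap measures between a predicted and a ground-truth binary segmentation mask. The following is a minimal NumPy sketch of how such pixel-wise metrics are commonly computed; the function name and toy masks are illustrative and not taken from the paper, and lesion-wise Dice (which requires matching individual lesions) is not shown.

    import numpy as np

    def pixelwise_metrics(pred: np.ndarray, truth: np.ndarray):
        """Pixel-wise Dice, precision and sensitivity for binary masks.

        Both inputs are boolean/0-1 arrays of identical shape, e.g. a 3D
        segmentation volume. Illustrative only; not the authors' code.
        """
        pred = pred.astype(bool)
        truth = truth.astype(bool)

        tp = np.logical_and(pred, truth).sum()   # voxels segmented by both
        fp = np.logical_and(pred, ~truth).sum()  # voxels only in the prediction
        fn = np.logical_and(~pred, truth).sum()  # voxels only in the ground truth

        dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
        precision = tp / (tp + fp) if (tp + fp) else 1.0
        sensitivity = tp / (tp + fn) if (tp + fn) else 1.0
        return dice, precision, sensitivity

    # Toy example: two partially overlapping square masks.
    if __name__ == "__main__":
        truth = np.zeros((8, 8), dtype=bool)
        truth[2:6, 2:6] = True
        pred = np.zeros((8, 8), dtype=bool)
        pred[3:7, 3:7] = True
        print(pixelwise_metrics(pred, truth))

In this convention, sensitivity (recall) measures how much of the ground-truth tumor volume the model recovers, while precision measures how much of the predicted volume is truly tumor; the Dice score is their harmonic mean.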
ISSN: 2191-219X
DOI: 10.1186/s13550-022-00901-2