Progressive auto-segmentation for cone-beam computed tomography-based online adaptive radiotherapy

Bibliographic Details
Published in: Physics and Imaging in Radiation Oncology 2024-07, Vol. 31, p. 100610, Article 100610
Authors: Zhao, Hengrui; Liang, Xiao; Meng, Boyu; Dohopolski, Michael; Choi, Byongsu; Cai, Bin; Lin, Mu-Han; Bai, Ti; Nguyen, Dan; Jiang, Steve
Format: Article
Language: English
Online access: Full text
Description
Abstract: Accurate and automated segmentation of targets and organs-at-risk (OARs) is crucial for the successful clinical application of online adaptive radiotherapy (ART). Current methods for cone-beam computed tomography (CBCT) auto-segmentation often fail to reach clinical acceptability, and they overlook the wealth of information available from initial planning and prior adaptive fractions that could enhance segmentation precision. We introduce a novel framework that incorporates data from a patient's initial plan and previous adaptive fractions, harnessing this additional temporal context to significantly refine segmentation accuracy for the current fraction's CBCT images. We present LSTM-UNet, an architecture that integrates Long Short-Term Memory (LSTM) units into the skip connections of the traditional U-Net framework to retain information from previous fractions. The models underwent initial pre-training on simulated data, followed by fine-tuning on a clinical dataset. Our proposed model's segmentation predictions yield an average Dice similarity coefficient of 79% across 8 head-and-neck organs and targets, compared to 52% from a baseline model without prior knowledge and 78% from a baseline model with prior knowledge but no memory. By effectively utilizing information from prior fractions, the proposed model surpasses baseline segmentation frameworks and reduces the effort clinicians spend revising auto-segmentation results. Moreover, it can work in tandem with registration-based methods that offer better prior knowledge. Our model holds promise for integration into the online ART workflow, offering precise segmentation capabilities on synthetic CT images.
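To make the architectural idea concrete, below is a minimal PyTorch sketch of a U-Net whose skip connections pass through convolutional LSTM cells, so that encoder features from the initial plan and prior adaptive fractions condition the current fraction's segmentation. This is an illustration under stated assumptions, not the authors' published implementation: the names (ConvLSTMCell, LSTMUNet), the two-level depth, the channel widths, and the nine output classes (eight head-and-neck structures plus background) are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Convolutional LSTM cell; one 3x3 conv produces all four gates."""
    def __init__(self, channels):
        super().__init__()
        self.gates = nn.Conv2d(2 * channels, 4 * channels, kernel_size=3, padding=1)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class LSTMUNet(nn.Module):
    """Two-level U-Net with a ConvLSTM cell on each skip connection."""
    def __init__(self, in_ch=1, n_classes=9, base=16):
        super().__init__()
        self.base = base
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, 2 * base)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(2 * base, 4 * base)
        self.skip1 = ConvLSTMCell(base)       # memory on the shallow skip
        self.skip2 = ConvLSTMCell(2 * base)   # memory on the deep skip
        self.up2 = nn.ConvTranspose2d(4 * base, 2 * base, 2, stride=2)
        self.dec2 = conv_block(4 * base, 2 * base)
        self.up1 = nn.ConvTranspose2d(2 * base, base, 2, stride=2)
        self.dec1 = conv_block(2 * base, base)
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, fractions):
        """fractions: list of (B, in_ch, H, W) images ordered in time,
        initial plan first, the current fraction's image last."""
        B, _, H, W = fractions[0].shape
        dev = fractions[0].device
        z = lambda ch, h, w: torch.zeros(B, ch, h, w, device=dev)
        s1 = (z(self.base, H, W), z(self.base, H, W))
        s2 = (z(2 * self.base, H // 2, W // 2), z(2 * self.base, H // 2, W // 2))
        for x in fractions:  # hidden states carry prior-fraction context forward
            e1 = self.enc1(x)
            e2 = self.enc2(self.pool(e1))
            h1, s1 = self.skip1(e1, s1)
            h2, s2 = self.skip2(e2, s2)
            b = self.bottleneck(self.pool(e2))
            d2 = self.dec2(torch.cat([self.up2(b), h2], dim=1))
            d1 = self.dec1(torch.cat([self.up1(d2), h1], dim=1))
        return self.head(d1)  # logits for the current fraction

def dice(pred, target, eps=1e-6):
    """Dice similarity coefficient on binary masks (the metric in the abstract)."""
    inter = (pred * target).sum()
    return (2 * inter / (pred.sum() + target.sum() + eps)).item()

# Example: segment the current fraction given two prior fractions (toy data).
model = LSTMUNet()
frames = [torch.randn(1, 1, 64, 64) for _ in range(3)]
logits = model(frames)  # shape: (1, 9, 64, 64)
```

In this sketch the recurrent state is reset per patient and unrolled over the fraction sequence at both training and inference time; the pre-training-on-simulated-data then fine-tuning-on-clinical-data schedule described in the abstract would sit on top of this unchanged.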
ISSN: 2405-6316
DOI: 10.1016/j.phro.2024.100610