Artificial intelligence-based bone-enhanced magnetic resonance image-a computed tomography/magnetic resonance image composite image modality in nasopharyngeal carcinoma radiotherapy
Published in: Quantitative Imaging in Medicine and Surgery, 2021-12, Vol. 11 (12), p. 4709-4720
Main authors: , , , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: In the radiotherapy of nasopharyngeal carcinoma (NPC), magnetic resonance imaging (MRI) is widely used to delineate the tumor area more accurately. While MRI offers higher soft-tissue contrast, patient positioning and couch correction based on bony image fusion with computed tomography (CT) are also necessary. There is thus an urgent need to obtain high image contrast between bone and soft tissue to facilitate target delineation and patient positioning for NPC radiotherapy. In this paper, our aim is to develop a novel image conversion between the CT and MRI modalities to obtain clear bone and soft-tissue images simultaneously, here called bone-enhanced MRI (BeMRI).
Thirty-five patients were retrospectively selected for this study. All patients underwent clinical CT simulation and 1.5 T MRI within the same week at Shenzhen Second People's Hospital. To synthesize BeMRI, two deep learning networks, U-Net and CycleGAN, were constructed to transform MRI into synthetic CT (sCT) images. Each network used 28 patients' images as the training set, while the remaining 7 patients served as the test set (~1/5 of the dataset). The bone structure was then extracted from the sCT by a threshold-based method and embedded into the corresponding region of the MRI image to generate the BeMRI image. To evaluate the performance of these networks, the following metrics were applied: mean absolute error (MAE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR).
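The threshold-and-embed step described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the HU threshold value, the rescaling of bone intensities into the MRI value range, and the function names are all assumptions, since the abstract only states that bone is extracted from the sCT by thresholding and embedded into the MRI.

```python
import numpy as np

# Hypothetical HU threshold for bone; the actual value is not given in the abstract.
BONE_HU_THRESHOLD = 300


def compose_bemri(sct_hu, mri, threshold=BONE_HU_THRESHOLD):
    """Embed bone voxels extracted from a synthetic CT (in HU) into the MRI.

    sct_hu and mri are co-registered arrays of the same shape. Bone is taken
    as voxels whose sCT value exceeds the threshold; those voxels in the MRI
    are replaced by a bone-intensity map rescaled into the upper half of the
    MRI value range, so bone appears bright against soft tissue.
    """
    bone_mask = sct_hu > threshold
    bemri = mri.astype(np.float64).copy()
    if bone_mask.any():
        bone = sct_hu[bone_mask].astype(np.float64)
        lo, hi = bone.min(), bone.max()
        span = hi - lo if hi > lo else 1.0
        bemri[bone_mask] = mri.max() * (0.5 + 0.5 * (bone - lo) / span)
    return bemri, bone_mask
```

Soft-tissue voxels keep their original MRI intensities, so the composite retains MRI tumor contrast while adding CT-derived bone detail.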
In our experiments, both deep learning models achieved good performance and were able to effectively extract bone structure from MRI. Specifically, the supervised U-Net model achieved the best results, with the lowest overall average MAE of 125.55 (P …)
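The three evaluation metrics named in the abstract (MAE, PSNR, SSIM) can be computed as below. This is a self-contained sketch: the `global_ssim` function uses the single-window form of SSIM with the standard constants, whereas common implementations average SSIM over local sliding windows, so its values will differ from those; the paper does not state which variant was used.

```python
import numpy as np


def mae(a, b):
    """Mean absolute error between two same-shaped images."""
    return float(np.mean(np.abs(a.astype(np.float64) - b.astype(np.float64))))


def psnr(a, b, data_range):
    """Peak signal-to-noise ratio in dB; data_range is the intensity span."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return float(10.0 * np.log10(data_range ** 2 / mse))


def global_ssim(a, b, data_range):
    """Single-window (global) SSIM with the usual constants K1=0.01, K2=0.03."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    num = (2 * mu_a * mu_b + c1) * (2 * cov + c2)
    den = (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)
    return float(num / den)
```

For sCT evaluation, MAE is typically reported directly in Hounsfield units over the patient body mask, which is consistent with the overall average MAE of 125.55 reported above.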
ISSN: 2223-4292, 2223-4306
DOI: 10.21037/qims-20-1239