Paired cycle‐GAN‐based image correction for quantitative cone‐beam computed tomography
Published in: Medical physics (Lancaster), 2019-09, Vol. 46 (9), p. 3998-4009
Main authors: , , , , , , , ,
Format: Article
Language: English
Online access: Full text
Abstract:

Purpose
The incorporation of cone-beam computed tomography (CBCT) has enabled enhanced image-guided radiation therapy. While CBCT allows for daily 3D imaging, its images suffer from severe artifacts that limit the clinical potential of CBCT. In this work, a deep learning-based method for generating high-quality corrected CBCT (CCBCT) images is proposed.
Methods
The proposed method integrates a residual block concept into a cycle-consistent adversarial network (cycle-GAN) framework, called res-cycle GAN, to learn a mapping between CBCT images and paired planning CT images. Compared with a GAN, a cycle-GAN includes an inverse transformation from CT to CBCT images, which constrains the model by forcing calculation of both a CCBCT and a synthetic CBCT. A fully convolutional neural network with residual blocks is used in the generator to enable end-to-end CBCT-to-CT transformations. The proposed algorithm was evaluated using 24 sets of patient data in the brain and 20 sets of patient data in the pelvis. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC), and spatial non-uniformity (SNU) indices were used to quantify the correction accuracy of the proposed algorithm. The proposed method was compared to both a conventional scatter correction method and another machine learning-based CBCT correction method.
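The abstract does not give formulas for the evaluation indices, but MAE, PSNR, and NCC have standard definitions for paired images. The sketch below is a minimal NumPy illustration under those standard definitions (SNU is omitted because its ROI-based definition varies between papers); it is not the authors' evaluation code.

```python
import numpy as np

def mae_hu(ct, cbct):
    # Mean absolute error between paired images, in Hounsfield units.
    return float(np.mean(np.abs(ct - cbct)))

def psnr_db(ct, cbct, data_range=None):
    # Peak signal-to-noise ratio in dB; data_range is the reference
    # image's intensity span unless given explicitly.
    if data_range is None:
        data_range = float(ct.max() - ct.min())
    mse = np.mean((ct - cbct) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ncc(ct, cbct):
    # Normalized cross-correlation (zero-mean, Pearson form);
    # equals 1.0 for identical images.
    a = ct - ct.mean()
    b = cbct - cbct.mean()
    return float(np.sum(a * b) /
                 (np.sqrt(np.sum(a ** 2)) * np.sqrt(np.sum(b ** 2))))
```

Each function takes the planning CT as the reference and the (corrected) CBCT as the test image, matching how the abstract reports correction accuracy.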
Results
Overall, the MAE, PSNR, NCC, and SNU were 13.0 HU, 37.5 dB, 0.99, and 0.05 in the brain, and 16.1 HU, 30.7 dB, 0.98, and 0.09 in the pelvis for the proposed method: improvements of 45%, 16%, 1%, and 93% in the brain, and 71%, 38%, 2%, and 65% in the pelvis, over the original CBCT images. The proposed method showed superior image quality compared to the scatter correction method, reducing noise and artifact severity, and produced images with less noise and fewer artifacts than the comparison machine learning-based method.
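As a quick sanity check of the reported numbers: assuming "improvement of X%" means the metric changed by X% relative to the uncorrected CBCT value (an interpretation not stated explicitly in the abstract), the uncorrected brain MAE implied by a 13.0 HU corrected value and a 45% reduction can be back-calculated:

```python
# Back-of-envelope calculation; the percentage-improvement
# interpretation is an assumption, not taken from the paper.
corrected_mae_brain = 13.0   # HU, corrected CBCT (from Results)
mae_improvement = 0.45       # 45% reduction relative to CBCT (assumed)

# corrected = original * (1 - improvement)  =>  solve for original
implied_cbct_mae = corrected_mae_brain / (1.0 - mae_improvement)
print(round(implied_cbct_mae, 1))  # prints 23.6 (HU)
```

Under this reading, the uncorrected brain CBCT would have an MAE of roughly 23.6 HU against the planning CT.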
Conclusions
The authors have developed a novel deep learning-based method to generate high-quality corrected CBCT images. The proposed method increases onboard CBCT image quality, making it comparable to that of the planning CT. With further evaluation and clinical implementation, this method could lead to quantitative adaptive radiation therapy.
ISSN: 0094-2405, 2473-4209
DOI: 10.1002/mp.13656