MR‐based synthetic CT generation using a deep convolutional neural network method
Published in: Medical Physics (Lancaster), 2017-04, Vol. 44 (4), p. 1408-1419
Format: Article
Language: English
Online access: Full text
Abstract

Purpose
Interest has been growing rapidly in the field of radiotherapy in replacing CT with magnetic resonance imaging (MRI), owing to the superior soft tissue contrast offered by MRI and the desire to reduce unnecessary radiation dose. MR‐only radiotherapy also simplifies the clinical workflow and avoids uncertainties in aligning MR with CT. Methods, however, are needed to derive CT‐equivalent representations, often known as synthetic CT (sCT), from patient MR images for dose calculation and for patient positioning based on digitally reconstructed radiographs (DRRs). Synthetic CT estimation is also important for PET attenuation correction in hybrid PET‐MR systems. In this work we propose a novel deep convolutional neural network (DCNN) method for sCT generation and evaluate its performance on a set of brain tumor patient images.
Methods
The proposed method builds upon recent developments of deep learning and convolutional neural networks in the computer vision literature. The proposed DCNN model has 27 convolutional layers interleaved with pooling and unpooling layers and 35 million free parameters, which can be trained to learn a direct end‐to‐end mapping from MR images to their corresponding CTs. Training such a large model on our limited data is made possible through the principle of transfer learning and by initializing model weights from a pretrained model. Eighteen brain tumor patients with both CT and T1‐weighted MR images are used as experimental data and a sixfold cross‐validation study is performed. Each sCT generated is compared against the real CT image of the same patient on a voxel‐by‐voxel basis. Comparison is also made with respect to an atlas‐based approach that involves deformable atlas registration and patch‐based atlas fusion.
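For illustration only, the sketch below shows a much smaller encoder-decoder network in PyTorch that uses convolutional, pooling, and unpooling layers to learn a direct mapping from an MR slice to a synthetic-CT slice. The class name SimpleSCTNet, the layer counts, and the channel widths are placeholder assumptions; this is not the paper's 27-layer, 35-million-parameter model.

```python
# Minimal encoder-decoder sketch for MR-to-sCT mapping (illustrative only;
# the paper's DCNN is far deeper and is initialized from a pretrained model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleSCTNet(nn.Module):
    def __init__(self, in_channels: int = 1, base_channels: int = 32):
        super().__init__()
        # Encoder: convolutions followed by max pooling (indices kept for unpooling).
        self.enc1 = nn.Conv2d(in_channels, base_channels, kernel_size=3, padding=1)
        self.enc2 = nn.Conv2d(base_channels, base_channels * 2, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2, return_indices=True)
        # Decoder: unpooling followed by convolutions back to a single CT channel.
        self.unpool = nn.MaxUnpool2d(2)
        self.dec2 = nn.Conv2d(base_channels * 2, base_channels, kernel_size=3, padding=1)
        self.dec1 = nn.Conv2d(base_channels, 1, kernel_size=3, padding=1)

    def forward(self, mr: torch.Tensor) -> torch.Tensor:
        x = F.relu(self.enc1(mr))
        x, idx1 = self.pool(x)
        x = F.relu(self.enc2(x))
        x, idx2 = self.pool(x)
        x = self.unpool(x, idx2)
        x = F.relu(self.dec2(x))
        x = self.unpool(x, idx1)
        return self.dec1(x)  # predicted Hounsfield-unit values for the sCT slice

if __name__ == "__main__":
    model = SimpleSCTNet()
    mr_slice = torch.randn(1, 1, 256, 256)   # dummy T1-weighted MR slice
    sct_slice = model(mr_slice)
    print(sct_slice.shape)                    # torch.Size([1, 1, 256, 256])
```

The pooling/unpooling pairing mirrors the downsample-then-upsample structure described above, so the output sCT has the same spatial size as the input MR image.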
Results
The proposed DCNN method produced a mean absolute error (MAE) below 85 HU for 13 of the 18 test subjects. The overall average MAE was 84.8 ± 17.3 HU across all subjects, which was significantly better than the average MAE of 94.5 ± 17.8 HU for the atlas‐based method. The DCNN method also provided significantly better accuracy on two other metrics: the mean squared error (188.6 ± 33.7 versus 198.3 ± 33.0) and the Pearson correlation coefficient (0.906 ± 0.03 versus 0.896 ± 0.03). Although training a DCNN model can be slow, training needs to be done only once. Applying a trained model to generate a complete sCT volume for a new patient MR image took only 9 s, which was much faster than the atlas‐based approach.
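As a sketch of how such voxel-by-voxel comparisons can be computed, the snippet below evaluates the MAE, MSE, and Pearson correlation coefficient between a synthetic CT and the real CT. The function name evaluate_sct, the use of NumPy, and the boolean body mask are illustrative assumptions, not the paper's evaluation code.

```python
# Voxel-wise comparison metrics between a synthetic CT and the real CT,
# restricted to voxels inside a body/head mask (illustrative sketch).
import numpy as np

def evaluate_sct(sct: np.ndarray, ct: np.ndarray, mask: np.ndarray) -> dict:
    diff = sct[mask] - ct[mask]
    mae = np.mean(np.abs(diff))      # mean absolute error in HU
    mse = np.mean(diff ** 2)         # mean squared error
    r = np.corrcoef(sct[mask], ct[mask])[0, 1]  # Pearson correlation coefficient
    return {"MAE": float(mae), "MSE": float(mse), "PearsonR": float(r)}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct = rng.normal(0.0, 300.0, size=(64, 64, 64))    # dummy CT volume (HU)
    sct = ct + rng.normal(0.0, 50.0, size=ct.shape)   # dummy synthetic CT
    mask = np.ones(ct.shape, dtype=bool)              # dummy whole-volume mask
    print(evaluate_sct(sct, ct, mask))
```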
Conclusions
A DCNN model can be trained to generate sCT images directly from patient MR images, and it provided both better accuracy and much faster sCT generation than the atlas‐based approach evaluated here.
ISSN: 0094-2405, 2473-4209
DOI: 10.1002/mp.12155