Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image


Full description

Bibliographic Details
Published in: Medical Image Analysis, 2018-07, Vol. 47, p. 31-44
Authors: Xiang, Lei; Wang, Qian; Nie, Dong; Zhang, Lichi; Jin, Xiyao; Qiao, Yu; Shen, Dinggang
Format: Article
Language: English
Online access: Full text
Description
Abstract:

Highlights:
• We propose a very deep network architecture for estimating CT images directly from MR images. It learns an end-to-end mapping between the two imaging modalities, without any patch-level pre- or post-processing.
• We present a novel embedding strategy that embeds the tentatively synthesized CT image into the feature maps and transforms these feature maps forward for better estimation of the final CT image.
• Experimental results show that our method can be flexibly adapted to different applications, and that it outperforms state-of-the-art methods in both the accuracy of the estimated CT images and the speed of the synthesis process.

Recently, increasing attention has been drawn to medical image synthesis across modalities. Among these tasks, synthesizing a computed tomography (CT) image from a T1-weighted magnetic resonance (MR) image is of great importance, although the mapping between them is highly complex due to the large appearance gap between the two modalities. In this work, we tackle this MR-to-CT synthesis task with a novel deep embedding convolutional neural network (DECNN). Specifically, we generate feature maps from MR images and transform these feature maps forward through the convolutional layers of the network. Midway through this flow of feature maps, we compute a tentative CT synthesis and embed it back into the feature maps. This embedding operation yields better feature maps, which are transformed further forward in the DECNN. After repeating the embedding procedure several times, the network synthesizes the final CT image at its last layer. We have validated the proposed method on both brain and prostate imaging datasets, comparing it with state-of-the-art methods. Experimental results suggest that our DECNN (with repeated embedding operations) achieves superior performance in terms of both the perceptual quality of the synthesized CT images and the run-time cost of synthesis.
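The repeated embedding operation described in the abstract can be sketched schematically. The following is a minimal NumPy illustration of the data flow only, not the authors' implementation: the layer count `n_embed`, the feature width `n_feat`, the random weights, and the use of 1×1 convolutions in place of the paper's actual convolutional layers are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    # 1x1 "convolution": a per-pixel linear map across channels,
    # taking (C_in, H, W) features to (C_out, H, W) via weights (C_out, C_in).
    return np.einsum('oc,chw->ohw', w, x)

def decnn_sketch(mr, n_embed=3, n_feat=16):
    """Schematic DECNN flow: extract feature maps from the MR image,
    then repeatedly (1) synthesize a tentative CT from the midway
    features and (2) embed that tentative CT back into the feature
    maps before transforming them further forward."""
    c, _, _ = mr.shape
    # Initial feature extraction from the MR input (ReLU nonlinearity).
    feat = np.maximum(conv1x1(mr, rng.standard_normal((n_feat, c))), 0)
    for _ in range(n_embed):
        # Midway tentative CT estimate (single channel).
        tentative_ct = conv1x1(feat, rng.standard_normal((1, n_feat)))
        # Embedding: concatenate the tentative CT onto the feature maps.
        embedded = np.concatenate([feat, tentative_ct], axis=0)
        # Transform the embedded maps forward into refined features.
        feat = np.maximum(
            conv1x1(embedded, rng.standard_normal((n_feat, n_feat + 1))), 0)
    # Final CT synthesis at the end of the network.
    return conv1x1(feat, rng.standard_normal((1, n_feat)))

ct = decnn_sketch(rng.standard_normal((1, 8, 8)))
print(ct.shape)  # → (1, 8, 8)
```

The key structural point the sketch captures is that each embedding step widens the feature maps by one channel (the tentative CT) before the next transformation, so later layers can refine their estimate conditioned on the current synthesis.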
ISSN: 1361-8415; 1361-8423
DOI: 10.1016/j.media.2018.03.011