Fat-saturated image generation from multi-contrast MRIs using generative adversarial networks with Bloch equation-based autoencoder regularization

Highlights:
• Bloch equation-based autoencoder regularization GAN (BlochGAN).
• BlochGAN uses multi-contrast MR images to generate other contrast images.
• BlochGAN can learn the multi-contrast MR relationship based on the Bloch equation.
• Performance of BlochGAN is quantitatively and qualitatively demonstrated.
• BlochGAN can reduce the scan time of multi-contrast MR images.

Bibliographic Details
Published in: Medical Image Analysis 2021-10, Vol. 73, Article 102198
Main authors: Kim, Sewon, Jang, Hanbyol, Hong, Seokjun, Hong, Yeong Sang, Bae, Won C., Kim, Sungjun, Hwang, Dosik
Format: Article
Language: English
Online access: Full text
Description
Abstract: Obtaining multiple series of magnetic resonance (MR) images with different contrasts is useful for accurate diagnosis of human spinal conditions. However, this can be time-consuming and a burden on both the patient and the hospital. We propose a Bloch equation-based autoencoder regularization generative adversarial network (BlochGAN) to generate a fat-saturated T2-weighted (T2 FS) image from T1-weighted (T1-w) and T2-weighted (T2-w) images of the human spine. Our approach exploits the relationship between the contrasts through the Bloch equation, since it is a fundamental principle of MR physics and serves as the physical basis of each contrast. BlochGAN generates the target-contrast images using autoencoder regularization based on the Bloch equation to identify the physical basis of the contrasts. BlochGAN consists of four sub-networks: an encoder, a decoder, a generator, and a discriminator. The encoder extracts features from the multi-contrast input images, and the generator creates target T2 FS images from those features. The discriminator assists network learning by providing an adversarial loss, and the decoder reconstructs the input multi-contrast images, regularizing the learning process through a reconstruction loss. The discriminator and the decoder are used only during training. Our results demonstrate that BlochGAN achieves quantitatively and qualitatively superior performance compared to conventional medical image synthesis methods in generating spine T2 FS images from T1-w and T2-w images.
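The abstract describes a generator/encoder trained jointly with two auxiliary signals: an adversarial loss from the discriminator and a reconstruction loss from the decoder that regularizes the shared features. A minimal sketch of how such a combined objective could be computed is below; the function names, the non-saturating adversarial form, and the weighting factor `lam` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of a combined GAN-plus-autoencoder objective in the
# spirit of BlochGAN's training scheme. All names and weights are assumptions.

def adversarial_loss(d_fake):
    """Non-saturating generator loss: -log D(G(x)), with d_fake in (0, 1)."""
    return -np.mean(np.log(d_fake + 1e-8))

def reconstruction_loss(x_inputs, x_recon):
    """L1 reconstruction of the multi-contrast inputs (e.g. T1-w and T2-w)."""
    return np.mean(np.abs(x_inputs - x_recon))

def generator_objective(d_fake, x_inputs, x_recon, lam=10.0):
    """Total objective: adversarial term + lam * reconstruction term.

    The decoder's reconstruction term plays the role of the autoencoder
    regularization; both the decoder and the discriminator would be
    discarded at inference time, as the abstract notes.
    """
    return adversarial_loss(d_fake) + lam * reconstruction_loss(x_inputs, x_recon)
```

With a perfect reconstruction and a fully fooled discriminator, the objective approaches zero; a poor reconstruction dominates the total through the `lam`-weighted term.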
ISSN: 1361-8415, 1361-8423
DOI: 10.1016/j.media.2021.102198