MedSRGAN: medical images super-resolution using generative adversarial networks
Published in: Multimedia Tools and Applications, 2020-08, Vol. 79 (29-30), pp. 21815-21840
Format: Article
Language: English
Online access: Full text
Abstract: Super-resolution (SR) is an emerging application in medical imaging, driven by the need for high-quality images acquired with a limited radiation dose, such as low-dose computed tomography (CT) and low-field magnetic resonance imaging (MRI). However, because of the complexity and the higher visual requirements of medical images, SR remains a challenging task in medical imaging. In this study, we developed a deep-learning-based method, Medical Images SR using Generative Adversarial Networks (MedSRGAN), for SR in medical imaging. A novel convolutional neural network, the Residual Whole Map Attention Network (RWMAN), was developed as the generator network of MedSRGAN to extract useful information through different channels and to pay more attention to meaningful regions. In addition, a weighted sum of content loss, adversarial loss, and adversarial feature loss was used as a multi-task loss function during MedSRGAN training. 242 thoracic CT scans and 110 brain MRI scans were collected for training and evaluation of MedSRGAN. The results showed that MedSRGAN not only preserves more texture details but also generates more realistic patterns in reconstructed SR images. A mean opinion score (MOS) test on CT slices rated by five experienced radiologists demonstrates the effectiveness of our method.
ISSN: 1380-7501, 1573-7721
DOI: 10.1007/s11042-020-08980-w
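The abstract describes the generator objective as a weighted sum of a content loss, an adversarial loss, and an adversarial feature loss. The sketch below shows how such a multi-task loss could be assembled in PyTorch; it is not the authors' released code. The loss weights, the L1 content term, the BCE adversarial criterion, and the use of intermediate discriminator feature maps for the adversarial feature loss are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class MultiTaskGeneratorLoss(nn.Module):
    """Weighted sum of content, adversarial, and adversarial feature losses (sketch)."""

    def __init__(self, w_content=1.0, w_adv=1e-3, w_feat=1e-2):
        super().__init__()
        # Loss weights are placeholders, not values reported in the paper.
        self.w_content = w_content
        self.w_adv = w_adv
        self.w_feat = w_feat
        self.l1 = nn.L1Loss()              # assumed pixel-wise content criterion
        self.bce = nn.BCEWithLogitsLoss()  # assumed adversarial criterion

    def forward(self, sr, hr, d_logits_sr, d_feats_sr, d_feats_hr):
        # Content loss: pixel-wise difference between the SR output and the HR target.
        content = self.l1(sr, hr)
        # Adversarial loss: encourage the discriminator to label SR images as real.
        adv = self.bce(d_logits_sr, torch.ones_like(d_logits_sr))
        # Adversarial feature loss: match intermediate discriminator feature maps
        # of SR and HR images, layer by layer.
        feat = sum(self.l1(fs, fh) for fs, fh in zip(d_feats_sr, d_feats_hr))
        return self.w_content * content + self.w_adv * adv + self.w_feat * feat
```

In a full training loop, `d_logits_sr`, `d_feats_sr`, and `d_feats_hr` would be obtained by running the discriminator on the generated and ground-truth images and exposing its intermediate activations (for example via forward hooks or auxiliary outputs).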