Super-Resolution Reconstruction of Cell Images Based on Generative Adversarial Networks

Bibliographic Details
Published in: IEEE Access, 2024, Vol. 12, pp. 72252-72263
Main authors: Pan, Bin; Du, Yifeng; Guo, Xiaoming
Format: Article
Language: English
Online access: Full text
Description
Summary: In this study, we introduce Light-ESRGAN, a novel cellular image super-resolution reconstruction model utilizing Generative Adversarial Networks (GANs). High-resolution (HR) cellular images are pivotal in pathological research; however, capturing critical features such as cell edges during microscopic imaging is challenging due to hardware limitations and environmental factors, which frequently introduce noise and interference. Rapid advancements in deep learning have significantly enhanced the field of image super-resolution reconstruction, demonstrating substantial potential in cellular image processing. Our Light-ESRGAN employs a random degradation modeling process to leverage the GAN architecture for more accurate simulation of real-world degradation. We have also incorporated the Convolutional Block Attention Module (CBAM) into the residual blocks of the generator, enhancing its ability to reconstruct image edges and textures through the fusion of channel and spatial attention. For the discriminator, a lightweight U-Net structure is adopted, which not only reduces the model's parameter count but also improves its discriminative capacity. Compared to Real-ESRGAN and A-ESRGAN, Light-ESRGAN reduces model parameters by 34.7% and 66.4%, respectively. It demonstrates improved performance on publicly available cellular images, increasing the average Peak Signal-to-Noise Ratio (PSNR) by 1.174 dB and 1.992 dB, the Structural Similarity Index (SSIM) by an average of 1.5% and 9.1%, and reducing the Normalized Root Mean Square Error (NRMSE) by an average of 0.005 and 0.009.
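
To make the architectural description above concrete, the following is a minimal PyTorch sketch of a CBAM-augmented residual block of the kind the abstract attributes to the generator: channel attention and spatial attention are applied in sequence to the residual branch before it is added back to the input. This is an illustrative assumption only; the module names (ChannelAttention, SpatialAttention, CBAMResidualBlock) and hyperparameters (channels=64, reduction=16, kernel_size=7) are hypothetical and are not taken from the Light-ESRGAN paper.

# Hypothetical sketch of a CBAM-augmented residual block, following the
# abstract's description of channel + spatial attention fused into the
# generator's residual blocks. Names and hyperparameters are illustrative
# assumptions, not the authors' exact configuration.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average-pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max-pooling branch
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale                     # re-weight feature channels


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)    # channel-wise average map
        mx, _ = x.max(dim=1, keepdim=True)   # channel-wise max map
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale                     # re-weight spatial locations


class CBAMResidualBlock(nn.Module):
    """Residual block with CBAM (channel then spatial attention) on the residual branch."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.cbam = nn.Sequential(ChannelAttention(channels), SpatialAttention())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.cbam(self.body(x))   # attention-refined residual connection


if __name__ == "__main__":
    block = CBAMResidualBlock(64)
    print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])

Applying the attention stage after the convolutions lets the block re-weight feature maps before the identity shortcut is added, which is consistent with the edge- and texture-oriented motivation given in the abstract.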
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3402535