DRGAN: A Detail Recovery-Based Model for Optical Remote Sensing Images Super-Resolution


Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2025, Vol. 63, pp. 1-13
Main Authors: Song, Yongchao; Sun, Lijun; Bi, Jiping; Quan, Siwen; Wang, Xuan
Format: Article
Language: English
Summary: The need for high-resolution (HR) remote sensing images has grown significantly in recent years as a result of the rapid advancement of fine-sensing technologies. However, increasing sensor resolution usually requires a costly investment. To tackle this challenge, super-resolution (SR) methods for remote sensing images have emerged as a cost-effective alternative that enhances the quality and usability of existing low-resolution (LR) images. Although many current methods achieve promising reconstruction results, they often suffer from problems such as over-smoothing and artifacts. To address these problems, we propose an SR reconstruction model for detail recovery based on generative adversarial networks (GANs), referred to as DRGAN. Specifically, in place of the traditional residual-in-residual dense block network (RRDBNet), we propose a novel dense residual network (OSRRDBNet) that uses dynamic convolution and self-attention mechanisms to recover rich detailed information in the image more effectively. In addition, we employ an average pooling layer to enhance the ability to capture HR image features. In experiments on three different remote sensing datasets, DRGAN shows remarkable reconstruction results and successfully recovers rich detail information in the images.
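The summary mentions an average-pooling layer used to strengthen the capture of HR image features. As a rough, hypothetical illustration only (the paper's actual layer configuration is not given here), non-overlapping average pooling over a 2-D feature map can be sketched as:

```python
import numpy as np

def avg_pool2d(x, k=2):
    """Non-overlapping k x k average pooling over a 2-D feature map.

    Crops the map to a multiple of k, then averages each k x k block.
    """
    h, w = x.shape
    x = x[: h - h % k, : w - w % k]
    return x.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

# Toy 4x4 feature map; pooling halves each spatial dimension.
feat = np.arange(16, dtype=float).reshape(4, 4)
pooled = avg_pool2d(feat)
print(pooled)  # [[ 2.5  4.5]
               #  [10.5 12.5]]
```

In a real network this operation would be applied per channel inside the feature-extraction path; the kernel size `k=2` here is an arbitrary choice for the sketch.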
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2024.3512528