CLRNet: A Residual Network Based on ConvLSTM for Progressive Pansharpening
| Published in: | IEEE Geoscience and Remote Sensing Letters, 2024, Vol. 21, pp. 1-5 |
|---|---|
| Main authors: | , , , |
| Format: | Article |
| Language: | English |
| Abstract: | In this letter, we design a progressive pansharpening network, termed CLRNet, which cascades two deep residual subnets (DRNets) with the same structure and uses them to perform progressive fusion at two scales, gradually fusing panchromatic (PAN) images with low-resolution multispectral (LRMS) images. Each DRNet cascades multiple convolutional long short-term memory (ConvLSTM) units that capture the dependence relationships among hierarchical features. First, considering the sensitivity of spectral features to hierarchy and of spatial features to scale, we construct a deep progressive pansharpening network to comprehensively represent the original information. Second, as the number of network layers increases, the high-frequency content of feature maps in deep networks is gradually smoothed out; introducing residual learning into the network therefore strengthens attention to texture details and improves the spatial resolution of the fusion results. Finally, when hierarchical features are extracted from deep networks, deep feature maps depend strongly on shallow feature maps. We capture the differences among hierarchical features and among multiscale features, obtaining rich spatial features and realistic spectral features. The proposed CLRNet achieves a quality with no reference (QNR) index of 0.926 and a structural similarity index measure (SSIM) of 0.984, and reduces the relative dimensionless global error in synthesis (ERGAS) to 0.603 on the GaoFen-2 dataset, a significant improvement over other state-of-the-art (SOTA) methods. |
| ISSN: | 1545-598X, 1558-0571 |
| DOI: | 10.1109/LGRS.2024.3412685 |
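
The abstract describes the architecture only at a high level: two structurally identical residual subnets, each built from cascaded ConvLSTM units, fusing PAN and LRMS images progressively at two scales. Below is a minimal PyTorch-style sketch of how such a design could be read; all class and parameter names (`ConvLSTMCell`, `DRNet`, `CLRNet`, `hid_ch`, `num_units`) and the interpolation choices are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of two-scale progressive fusion with ConvLSTM-based
# residual subnets, assuming a PAN/MS resolution ratio of 4 (e.g. GaoFen-2).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvLSTMCell(nn.Module):
    """Standard convolutional LSTM cell: all four gates from one convolution."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class DRNet(nn.Module):
    """Residual subnet: cascaded ConvLSTM units whose recurrent state links
    hierarchical features; the MS input is added back (residual learning)."""
    def __init__(self, ms_ch, hid_ch=32, num_units=4):
        super().__init__()
        self.head = nn.Conv2d(ms_ch + 1, hid_ch, 3, padding=1)  # MS bands + PAN
        self.cells = nn.ModuleList([ConvLSTMCell(hid_ch, hid_ch) for _ in range(num_units)])
        self.tail = nn.Conv2d(hid_ch, ms_ch, 3, padding=1)

    def forward(self, ms, pan):
        x = self.head(torch.cat([ms, pan], dim=1))
        h = torch.zeros_like(x)
        c = torch.zeros_like(x)
        for cell in self.cells:           # hierarchical features flow through the LSTM state
            h, c = cell(x, (h, c))
            x = h
        return ms + self.tail(h)          # residual branch predicts high-frequency detail


class CLRNet(nn.Module):
    """Two DRNets with the same structure fuse PAN and LRMS progressively,
    first at half resolution and then at the full PAN resolution."""
    def __init__(self, ms_ch):
        super().__init__()
        self.stage1 = DRNet(ms_ch)
        self.stage2 = DRNet(ms_ch)

    def forward(self, lrms, pan):
        # Stage 1: fuse at the intermediate (half PAN) scale.
        pan_half = F.interpolate(pan, scale_factor=0.5, mode="bilinear", align_corners=False)
        ms_half = F.interpolate(lrms, size=pan_half.shape[-2:], mode="bilinear", align_corners=False)
        fused_half = self.stage1(ms_half, pan_half)
        # Stage 2: upsample the intermediate result and fuse again at full scale.
        ms_full = F.interpolate(fused_half, size=pan.shape[-2:], mode="bilinear", align_corners=False)
        return self.stage2(ms_full, pan)


if __name__ == "__main__":
    net = CLRNet(ms_ch=4)                 # e.g. 4-band GaoFen-2 multispectral input
    lrms = torch.randn(1, 4, 64, 64)      # low-resolution MS
    pan = torch.randn(1, 1, 256, 256)     # panchromatic at 4x the MS resolution
    print(net(lrms, pan).shape)           # torch.Size([1, 4, 256, 256])
```

In this reading, the residual connection in each subnet adds predicted high-frequency detail back onto the multispectral input, which matches the abstract's motivation for residual learning, and the two stages share the same structure but not the same parameters; whether the published model shares weights across stages is not stated in the record.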