Learn to Be Clear and Colorful: An End-to-End Network for Panchromatic Image Enhancement
Published in: IEEE Geoscience and Remote Sensing Letters, 2022, Vol. 19, pp. 1-5
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Benefiting from high coverage and frequent revisits, satellite imagery is ideal data for large-scale, real-time Earth observation. However, owing to their limited resolution and chromatic information, satellite images, and panchromatic images in particular, are not suitable for accurate Earth-observation tasks such as road extraction, vehicle detection, and building segmentation. In this research, we propose a cascaded fully convolutional network (CFCN) consisting of a residual dense super-resolution network (RDSRN) for grayscale image super-resolution (SR) and a residual deconvolution colorization network (RDCN) for grayscale image colorization. This architecture simultaneously learns texture detail and color information from aerial images and transfers them to enhance panchromatic images. Furthermore, we introduce an indirect evaluation metric, learned extraction similarity (LES), to estimate the quality of the generated image in the absence of ground truth. Experiments on a multispectral image dataset demonstrate that panchromatic images enhanced by the proposed CFCN exhibit both texture and color fidelity comparable to aerial images. For a pre-trained U-Net, the CFCN-enhanced images increase the LES of overall accuracy by 12.8% over raw panchromatic images (96.4% versus 83.6%).
ISSN: 1545-598X, 1558-0571
DOI: 10.1109/LGRS.2022.3142994
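
The abstract describes a two-stage cascade: a residual dense super-resolution stage (RDSRN) followed by a residual deconvolution colorization stage (RDCN). Below is a minimal PyTorch sketch of such a cascade, assuming a simple residual-dense SR branch and a deconvolution-based colorization branch; the class names, layer counts, channel widths, and 2x upscaling factor are illustrative assumptions and do not reproduce the configuration published in the paper.

```python
# Minimal sketch of a cascaded grayscale SR + colorization network.
# Hyperparameters and module layouts are illustrative assumptions, not the
# published CFCN configuration.
import torch
import torch.nn as nn


class ResidualDenseBlock(nn.Module):
    """Densely connected convolutions with a local residual connection."""

    def __init__(self, channels: int = 64, growth: int = 32, layers: int = 4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, kernel_size=3, padding=1)
            for i in range(layers)
        )
        self.fuse = nn.Conv2d(channels + layers * growth, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(self.act(conv(torch.cat(feats, dim=1))))
        return x + self.fuse(torch.cat(feats, dim=1))


class RDSRN(nn.Module):
    """Super-resolution stage: 1-channel low-res input -> 1-channel high-res output."""

    def __init__(self, channels: int = 64, blocks: int = 3, scale: int = 2):
        super().__init__()
        self.head = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.body = nn.Sequential(*[ResidualDenseBlock(channels) for _ in range(blocks)])
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, channels * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )
        self.tail = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, pan_lr):
        x = self.head(pan_lr)
        x = x + self.body(x)  # global residual over the dense blocks
        return self.tail(self.upsample(x))


class RDCN(nn.Module):
    """Colorization stage: 1-channel grayscale -> 3-channel color image."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, kernel_size=4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(channels, channels, kernel_size=4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.to_rgb = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

    def forward(self, gray_hr):
        x = self.decoder(self.encoder(gray_hr))
        rgb = self.to_rgb(x)
        # Residual-style skip: reuse the grayscale input as a luminance prior.
        return rgb + gray_hr.repeat(1, 3, 1, 1)


class CFCN(nn.Module):
    """Cascade: super-resolve the panchromatic image, then colorize it."""

    def __init__(self):
        super().__init__()
        self.sr = RDSRN()
        self.colorize = RDCN()

    def forward(self, pan_lr):
        return self.colorize(self.sr(pan_lr))


if __name__ == "__main__":
    model = CFCN()
    pan = torch.rand(1, 1, 128, 128)  # low-resolution panchromatic patch
    print(model(pan).shape)           # torch.Size([1, 3, 256, 256])
```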
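The abstract also introduces learned extraction similarity (LES) as an indirect, ground-truth-free quality measure tied to a pre-trained extraction network (a U-Net in the reported experiments). The sketch below is one plausible reading of that idea, assuming the reference extraction is produced from the corresponding aerial image and that overall pixel accuracy serves as the similarity score; the function name and this interpretation are assumptions, not the paper's exact definition.

```python
# Hedged sketch of a "learned extraction similarity"-style metric: the quality of
# an enhanced image is judged indirectly by how closely a pre-trained extraction
# model's prediction on it matches the prediction on a trusted reference image.
# The paper's exact formulation may differ.
import torch


@torch.no_grad()
def learned_extraction_similarity(extractor, enhanced, reference) -> float:
    """Overall accuracy between the extractor's class maps on two image batches.

    extractor: pre-trained segmentation model (e.g., a U-Net) returning logits
               of shape (N, C, H, W).
    enhanced:  batch of enhanced panchromatic images, shaped as the extractor expects.
    reference: batch of reference aerial images of the same scenes.
    """
    pred_enhanced = extractor(enhanced).argmax(dim=1)    # (N, H, W) class map
    pred_reference = extractor(reference).argmax(dim=1)  # treated as pseudo ground truth
    agreement = (pred_enhanced == pred_reference).float()
    return agreement.mean().item()                       # overall accuracy in [0, 1]
```

Under this reading, the reported figures would correspond to an overall-accuracy agreement of 0.964 for the pre-trained U-Net on CFCN-enhanced inputs versus 0.836 on raw panchromatic inputs.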