Blind Superresolution of Satellite Videos by Ghost Module-based Convolutional Networks
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2023-01, Vol. 61, p. 1-1
Main authors:
Format: Article
Language: English
Subjects:
Abstract: Deep learning (DL)-based video satellite superresolution (SR) methods have recently yielded superior performance over traditional model-based methods by working in an end-to-end manner. Existing DL-based methods usually assume that the blur kernels are known and therefore do not model them during restoration. However, this assumption rarely holds for real satellite videos and leads to over-smoothed results. In this paper, we propose a Ghost module-based convolutional network model for blind SR of satellite videos. The proposed Ghost module-based video SR method (GVSR), which assumes the blur kernel is unknown, consists of two main modules: a preliminary image generation module and an SR reconstruction module. First, motion information from adjacent video frames and the warped images are obtained by an optical flow estimation network, the blur kernel is flexibly estimated by a blur kernel estimation network, and a preliminary high-resolution image is generated from the blur kernel and the warped images. Second, a reconstruction network consisting of three paths with attention-based Ghost (AG) bottlenecks is designed to remove artifacts from the preliminary image and produce the final high-quality SR results. Experiments conducted on Jilin-1 and OVS-1 satellite videos demonstrate that both the qualitative and quantitative performance of the proposed method is superior to current state-of-the-art methods.
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2022.3233099
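For readers unfamiliar with the building block named in the abstract, below is a minimal PyTorch sketch of a Ghost module (the cheap-feature construction from GhostNet) wrapped in a simple channel-attention residual block. It only illustrates the general idea behind an "attention-based Ghost (AG) bottleneck"; the class names, attention form, and hyperparameters are assumptions for illustration, not the GVSR authors' implementation.

```python
# Minimal sketch of a Ghost module plus an assumed attention-based Ghost bottleneck.
# NOTE: the AG bottleneck layout here is a hypothetical illustration, not the
# architecture described in the paper.
import torch
import torch.nn as nn


class GhostModule(nn.Module):
    """Ghost module: a few 'intrinsic' features from a regular conv, plus cheap
    depthwise 'ghost' features, concatenated along the channel dimension."""

    def __init__(self, in_ch, out_ch, ratio=2, kernel_size=1, dw_size=3):
        super().__init__()
        init_ch = out_ch // ratio               # intrinsic feature channels
        ghost_ch = out_ch - init_ch             # cheap (ghost) feature channels
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel_size, padding=kernel_size // 2, bias=False),
            nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(             # depthwise conv = cheap operation
            nn.Conv2d(init_ch, ghost_ch, dw_size, padding=dw_size // 2,
                      groups=init_ch, bias=False),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)


class AGBottleneck(nn.Module):
    """Hypothetical attention-based Ghost bottleneck: two Ghost modules with
    squeeze-and-excitation style channel attention and a residual connection."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.ghost1 = GhostModule(channels, channels)
        self.ghost2 = GhostModule(channels, channels)
        self.attn = nn.Sequential(              # channel attention (assumed form)
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.ghost2(self.ghost1(x))
        return x + y * self.attn(y)             # residual + attention-weighted features


if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)          # dummy feature map
    print(AGBottleneck(64)(feats).shape)        # -> torch.Size([1, 64, 32, 32])
```

In the reconstruction stage the abstract describes three parallel paths built from such AG bottlenecks; the sketch above covers only the single-block idea, since the path structure, channel widths, and the optical-flow and blur-kernel estimation networks are not detailed in this record.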