DMNet: A Network Architecture Using Dilated Convolution and Multiscale Mechanisms for Spatiotemporal Fusion of Remote Sensing Images

Bibliographic Details
Published in: IEEE Sensors Journal, 2020-10, Vol. 20 (20), p. 12190-12202
Main authors: Li, Weisheng; Zhang, Xiayan; Peng, Yidong; Dong, Meilin
Format: Article
Language: English
Description
Abstract: Since remote sensing images cannot have both high temporal resolution and high spatial resolution, spatiotemporal fusion of remote sensing images has attracted increasing attention in recent years. Additionally, with the successful application of deep learning in various fields, spatiotemporal fusion algorithms based on deep learning have gradually diversified. We propose a network framework based on deep convolutional neural networks that incorporates dilated convolution and multiscale mechanisms; we refer to this framework as DMNet. In this method, we concatenate the feature maps to be fused, which avoids the noise that complex fusion rules can introduce. The multiscale mechanism then extracts contextual information from the image at various scales, enriching image detail. Skip connections carry feature maps from shallow convolutional layers forward, so that important image features are not lost during convolution. Additionally, dilated convolution expands the receptive field of the convolution kernel, which aids the extraction of fine detail features. To evaluate the robustness of our method, we conduct experiments on two datasets and compare the results with those of six representative spatiotemporal fusion methods. Both visual and objective results demonstrate the superior performance of our method.
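
The abstract describes four building blocks: concatenation of the inputs to be fused, multiscale feature extraction, dilated convolution, and skip connections. The following is a minimal PyTorch sketch of how these mechanisms can be combined; it is not the authors' released implementation, and all module names, channel counts, kernel sizes, and dilation rates are illustrative assumptions.

```python
# Minimal PyTorch sketch of the mechanisms described in the abstract:
# concatenating the inputs to be fused, extracting multiscale context with
# parallel dilated convolutions, and reusing shallow features through a
# skip connection. Channel counts, kernel sizes, and dilation rates are
# illustrative assumptions, not the authors' published configuration.
import torch
import torch.nn as nn


class MultiscaleDilatedBlock(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates."""

    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        # A 1x1 convolution merges the concatenated branch outputs.
        self.merge = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [self.act(branch(x)) for branch in self.branches]
        return self.act(self.merge(torch.cat(feats, dim=1)))


class FusionSketch(nn.Module):
    """Concatenate the images to be fused, then reconstruct the fine image."""

    def __init__(self, bands=6, width=32):
        super().__init__()
        # Three inputs (e.g. coarse images at two dates plus one fine image)
        # are concatenated along the channel axis instead of being combined
        # by a hand-crafted fusion rule.
        self.head = nn.Conv2d(3 * bands, width, kernel_size=3, padding=1)
        self.block1 = MultiscaleDilatedBlock(width, width)
        self.block2 = MultiscaleDilatedBlock(width, width)
        self.tail = nn.Conv2d(width, bands, kernel_size=3, padding=1)

    def forward(self, coarse_t1, coarse_t2, fine_t1):
        x = torch.cat([coarse_t1, coarse_t2, fine_t1], dim=1)
        shallow = torch.relu(self.head(x))
        deep = self.block2(self.block1(shallow))
        # Skip connection: add shallow features back so fine details
        # from early layers are not lost in the deeper convolutions.
        return self.tail(deep + shallow)


if __name__ == "__main__":
    coarse_t1 = torch.randn(1, 6, 128, 128)
    coarse_t2 = torch.randn(1, 6, 128, 128)
    fine_t1 = torch.randn(1, 6, 128, 128)
    print(FusionSketch()(coarse_t1, coarse_t2, fine_t1).shape)
    # -> torch.Size([1, 6, 128, 128])
```

In this sketch, each dilation rate forms one branch of the multiscale block, so context is gathered at several receptive-field sizes without downsampling, and the element-wise addition of shallow and deep features stands in for the skip connections described in the abstract.
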
ISSN: 1530-437X, 1558-1748
DOI: 10.1109/JSEN.2020.3000249