Adjacent-Level Feature Cross-Fusion With 3-D CNN for Remote Sensing Image Change Detection

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2023, Vol. 61, pp. 1-14
Authors: Ye, Yuanxin; Wang, Mengmeng; Zhou, Liang; Lei, Guangyang; Fan, Jianwei; Qin, Yao
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Deep learning-based (DL-based) change detection (CD) using remote sensing (RS) images has received increasing attention in recent years. However, effectively extracting and fusing the deep features of bi-temporal images to improve the accuracy of CD remains a challenge. To address this, a novel adjacent-level feature fusion network with 3-D convolution (named AFCF3D-Net) is proposed in this article. First, exploiting the inner fusion property of 3-D convolution, we design a new feature fusion approach that simultaneously extracts and fuses feature information from bi-temporal images. Then, to alleviate the semantic gap between low-level and high-level features, we propose an adjacent-level feature cross-fusion (AFCF) module that aggregates complementary feature information between adjacent levels. Furthermore, a full-scale skip-connection strategy is introduced to improve pixel-wise prediction and the compactness of changed objects in the results. Finally, the proposed AFCF3D-Net is validated on three challenging RS CD datasets: the Wuhan building dataset (WHU-CD), the LEVIR building dataset (LEVIR-CD), and the Sun Yat-Sen University dataset (SYSU-CD). Quantitative analysis and qualitative comparison demonstrate that AFCF3D-Net outperforms other state-of-the-art (SOTA) methods. The code for this work is available at https://github.com/wm-Githuber/AFCF3D-Net.
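The core fusion idea in the abstract (letting a 3-D convolution both extract and fuse bi-temporal features through its inner fusion property) can be illustrated with a short PyTorch sketch: the two epochs are stacked along a new depth axis and a kernel spanning that axis fuses them in a single operation. This is a minimal illustration under assumed shapes, not the authors' implementation; the module name BiTemporalFusion3D and all parameter choices are hypothetical, and the actual code is available at the repository linked above.

import torch
import torch.nn as nn

# Minimal sketch of bi-temporal fusion via 3-D convolution (hypothetical
# module; not the authors' code -- see the linked GitHub repository).
class BiTemporalFusion3D(nn.Module):
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # A kernel of depth 2 spans both epochs, so one convolution
        # extracts and fuses features at the same time; the temporal
        # dimension collapses from 2 to 1.
        self.conv = nn.Conv3d(in_channels, out_channels,
                              kernel_size=(2, 3, 3), padding=(0, 1, 1))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x_t1: torch.Tensor, x_t2: torch.Tensor) -> torch.Tensor:
        # Stack the two epochs along a new depth axis: (B, C, 2, H, W).
        x = torch.stack([x_t1, x_t2], dim=2)
        # Output: (B, out_channels, 1, H, W) -> (B, out_channels, H, W).
        return self.relu(self.conv(x)).squeeze(2)

# Example: fuse a pair of 3-band 256 x 256 patches into 64 feature maps.
fuse = BiTemporalFusion3D(in_channels=3, out_channels=64)
t1 = torch.randn(1, 3, 256, 256)
t2 = torch.randn(1, 3, 256, 256)
print(fuse(t1, t2).shape)  # torch.Size([1, 64, 256, 256])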
ISSN: 0196-2892 (print), 1558-0644 (electronic)
DOI: 10.1109/TGRS.2023.3305499