Automatic Urban Scene-Level Binary Change Detection Based on a Novel Sample Selection Approach and Advanced Triplet Neural Network

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2023-01, Vol. 61, p. 1-1
Authors: Fang, Hong; Guo, Shanchuan; Wang, Xin; Liu, Sicong; Lin, Cong; Du, Peijun
Format: Article
Language: English
Description
Abstract: Change detection is the process of identifying changed ground objects by comparing image pairs acquired at different times. Compared with pixel-level and object-level change detection, scene-level change detection provides semantic changes at the image level, making it important for applications related to change description and explanation, such as urban functional area change monitoring. Automatic scene-level change detection approaches do not require ground truth for training, making them more appealing in practical applications than non-automatic methods. However, existing automatic scene-level change detection methods utilize only low-level and mid-level features to extract changes between bi-temporal images, failing to fully exploit deep information. To address these issues, this paper proposes a novel automatic binary scene-level change detection approach based on deep learning. First, a pre-trained VGG-16 network and change vector analysis are adopted for scene-level direct pre-detection to produce a scene-level pseudo change map. Second, pixel-level classification is implemented using a decision tree, and a pixel-level to scene-level conversion strategy is designed to generate a second scene-level pseudo change map. Third, scene-level training samples are obtained by fusing the two pseudo change maps. Finally, the binary scene-level change map is produced by training a novel Scene Change Detection Triplet Network (SCDTN). The proposed SCDTN integrates a late-fusion sub-network and an early-fusion sub-network, comprehensively mining the deep information in each raw image as well as the temporal correlation between the two raw images. Experiments were performed on a public dataset and a new challenging dataset, and the results demonstrate the effectiveness and superiority of the proposed approach.
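
The pre-detection step described in the abstract (pre-trained VGG-16 features compared via change vector analysis) can be illustrated with a short sketch. The layer choice, global average pooling, and function names below are assumptions made for illustration; the paper's exact configuration may differ.

```python
import torch
from torchvision import models
from torchvision.models import VGG16_Weights

# ImageNet-pretrained VGG-16; only the convolutional feature extractor
# is kept, and no fine-tuning is performed (assumption for this sketch).
vgg = models.vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features.eval()

@torch.no_grad()
def scene_feature(patch: torch.Tensor) -> torch.Tensor:
    """Global-average-pooled deep feature of one scene patch.

    patch: (1, 3, H, W) tensor, already normalized with the standard
    ImageNet mean/std expected by VGG-16.
    """
    fmap = vgg(patch)                          # (1, 512, h, w) feature map
    return fmap.mean(dim=(2, 3)).squeeze(0)    # (512,) scene descriptor

def cva_magnitude(f_t1: torch.Tensor, f_t2: torch.Tensor) -> float:
    # Change vector analysis on deep features reduces to the Euclidean
    # norm of the feature difference; a large magnitude suggests change.
    return torch.linalg.norm(f_t2 - f_t1).item()
```

Scene patches whose magnitude falls clearly above or below a global threshold (for instance one chosen by Otsu's method, though the paper's exact rule may differ) could then be pseudo-labeled as changed or unchanged to form the first pseudo change map.

The SCDTN itself combines a late-fusion sub-network and an early-fusion sub-network. A minimal, hypothetical PyTorch skeleton of that dual-stream idea follows; `SCDTNSketch`, its layer sizes, and the fusion by concatenation are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class SCDTNSketch(nn.Module):
    """Minimal sketch of a dual-stream scene change detection network:
    two late-fusion streams plus one early-fusion stream. Layer sizes
    are hypothetical; the published SCDTN is more elaborate."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Late-fusion sub-network: a weight-sharing (Siamese) encoder
        # applied to each date separately, mining per-image deep features.
        self.shared_encoder = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Early-fusion sub-network: bi-temporal images stacked along the
        # channel axis and encoded jointly, mining temporal correlation.
        self.fusion_encoder = nn.Sequential(
            nn.Conv2d(6, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(3 * 64, num_classes)  # changed / unchanged

    def forward(self, x_t1: torch.Tensor, x_t2: torch.Tensor) -> torch.Tensor:
        f1 = self.shared_encoder(x_t1)
        f2 = self.shared_encoder(x_t2)
        f12 = self.fusion_encoder(torch.cat([x_t1, x_t2], dim=1))
        return self.classifier(torch.cat([f1, f2, f12], dim=1))
```

Trained on the pseudo-labeled scene samples obtained by fusing the two pseudo change maps, such a network would output changed/unchanged logits for each bi-temporal scene pair, e.g. `SCDTNSketch()(torch.rand(4, 3, 224, 224), torch.rand(4, 3, 224, 224))`.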
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2023.3235917