Spectral-Spatial-Temporal Transformers for Hyperspectral Image Change Detection

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2022, Vol. 60, pp. 1-14
Authors: Wang, Yanheng; Hong, Danfeng; Sha, Jianjun; Gao, Lianru; Liu, Lian; Zhang, Yonggang; Rong, Xianhui
Format: Article
Language: English
Description
Abstract: Convolutional neural networks (CNNs), with their excellent spatial feature extraction abilities, have become popular in remote sensing (RS) image change detection (CD). However, CNNs tend to focus on extracting spatial information while ignoring the spectral and temporal sequences that are important for hyperspectral images (HSIs). In this article, we propose a joint spectral, spatial, and temporal transformer for hyperspectral image change detection (HSI-CD), named SST-Former. First, the SST-Former position-encodes each pixel of the cube to preserve the spectral and spatial sequences. Second, a spectral transformer encoder extracts spectral sequence information. A class token, which stores the class information of a single temporal HSI, is then concatenated with the output of the spectral transformer encoder, and a spatial transformer encoder extracts spatial texture information in the next step. Finally, the features of the different temporal HSIs are fed into a temporal transformer, which extracts useful CD features from the current HSI pair, and the binary CD result is obtained through a multilayer perceptron (MLP). We evaluate the SST-Former on three HSI-CD datasets through extensive experiments, showing that it performs better than other state-of-the-art methods both visually and quantitatively. The code for this work will be available at https://github.com/yanhengwang-heu/IEEE_TGRS_SSTFormer .
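The spectral-tokenization step described in the abstract — prepending a class token to the per-band tokens of a single pixel cube and running self-attention over the resulting sequence — can be sketched as follows. This is a minimal illustration only: the dimensions, the single-head attention, and the random weights are assumptions for demonstration, not the paper's actual SST-Former implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    # tokens: (seq_len, d) -- e.g. the spectral-band tokens of one pixel cube.
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # scaled dot-product attention
    return scores @ V

rng = np.random.default_rng(0)
d, bands = 8, 16                                # token dim and band count (illustrative)
tokens = rng.normal(size=(bands, d))            # one token per spectral band
pos = rng.normal(size=(bands, d))               # positional encoding (assumed learnable)
cls = rng.normal(size=(1, d))                   # class token, as in ViT-style encoders
seq = np.concatenate([cls, tokens + pos], axis=0)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(seq, Wq, Wk, Wv)
print(out.shape)                                # (17, 8): class token + 16 band tokens
```

In the full architecture this encoder output would feed the spatial stage, and the features of both temporal HSIs would then go to the temporal transformer before the MLP classifier.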
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2022.3203075