Self-Supervised SAR-Optical Data Fusion of Sentinel-1/-2 Images


Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2022, Vol. 60, pp. 1-11
Main Authors: Chen, Yuxing; Bruzzone, Lorenzo
Format: Article
Language: English
Abstract: The effective combination of the complementary information provided by the huge amount of unlabeled multisensor data (e.g., synthetic aperture radar (SAR) and optical images) is a critical issue in remote sensing. Recently, contrastive learning methods have achieved remarkable success in obtaining meaningful feature representations from multiview data. However, these methods focus only on image-level features, which may not satisfy the requirements of dense prediction tasks such as land-cover mapping. In this work, we propose a self-supervised framework for SAR-optical data fusion and land-cover mapping tasks. SAR and optical images are fused by using a multiview contrastive loss at the image level and super-pixel level according to one of three possible strategies: early, intermediate, or late fusion. For the land-cover mapping task, we assign each pixel a land-cover class through the joint use of pretrained features and the spectral information of the image itself. Experimental results show that the proposed approach not only achieves comparable accuracy but also reduces the feature dimension with respect to the image-level contrastive learning method. Among the three fusion strategies, intermediate fusion achieves the best performance. The combination of the pixel-level fusion approach and self-training on spectral indices leads to further improvements in the land-cover mapping task with respect to the image-level fusion approach, especially with sparse pseudo labels. The code to reproduce our results can be found at https://github.com/yusin2it/SARoptical_fusion.
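To make the multiview contrastive objective mentioned in the abstract concrete, the following minimal PyTorch sketch treats the SAR and optical embeddings of the same scene (or super-pixel) as a positive pair and every other pairing in the batch as a negative. The function name, the temperature value, and the symmetrized NT-Xent formulation are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

    import torch
    import torch.nn.functional as F

    def multiview_contrastive_loss(z_sar: torch.Tensor,
                                   z_opt: torch.Tensor,
                                   temperature: float = 0.1) -> torch.Tensor:
        # L2-normalize the (N, D) projection-head outputs so that the dot
        # product below is a cosine similarity.
        z_sar = F.normalize(z_sar, dim=1)
        z_opt = F.normalize(z_opt, dim=1)
        # (N, N) similarity matrix: entry (i, j) compares SAR view i with
        # optical view j; positives for the same scene sit on the diagonal.
        logits = z_sar @ z_opt.t() / temperature
        targets = torch.arange(z_sar.size(0), device=z_sar.device)
        # Symmetrized cross-entropy over both retrieval directions
        # (SAR -> optical and optical -> SAR).
        return 0.5 * (F.cross_entropy(logits, targets)
                      + F.cross_entropy(logits.t(), targets))

Under the three fusion strategies named in the abstract, the same loss would be applied to embeddings taken from different depths of the two encoder branches: stacked inputs (early), merged intermediate feature maps (intermediate), or separate per-sensor outputs (late).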
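The abstract also mentions self-training on spectral indices to produce sparse pseudo labels. The sketch below shows one plausible way such labels could be derived from NDVI computed from Sentinel-2 bands (B8 is near-infrared, B4 is red); the function name, thresholds, and two-class scheme are assumptions for illustration only.

    import numpy as np

    def ndvi_pseudo_labels(b8: np.ndarray, b4: np.ndarray,
                           veg_thresh: float = 0.6,
                           bare_thresh: float = 0.1) -> np.ndarray:
        # NDVI = (NIR - red) / (NIR + red), with a small epsilon to avoid
        # division by zero over water or no-data pixels.
        ndvi = (b8 - b4) / (b8 + b4 + 1e-8)
        labels = np.full(ndvi.shape, -1, dtype=np.int8)  # -1 = unlabeled pixel
        labels[ndvi > veg_thresh] = 1    # confidently vegetated pixels
        labels[ndvi < bare_thresh] = 0   # confidently non-vegetated pixels
        return labels

Only the high-confidence pixels receive a label; the remaining pixels (marked -1) are left for the model to infer, which matches the sparse-pseudo-label setting the abstract refers to.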
ISSN: 0196-2892 (print), 1558-0644 (electronic)
DOI: 10.1109/TGRS.2021.3128072