Adaptive Multitype Contrastive Views Generation for Remote Sensing Image Semantic Segmentation
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2025, Vol. 63, p. 1-13
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Self-supervised contrastive learning is a powerful pretraining framework for learning invariant features from different views of remote sensing images; the performance of contrastive learning therefore depends heavily on how the views are generated. Current view generation is primarily accomplished through different transformations, whose types and parameters must be hand-crafted, so the diversity and discriminability of the generated views cannot be guaranteed. To address this, we propose a multitype view optimization method to optimize these transformations. We formulate contrastive learning as a min-max optimization problem in which the transformation parameters are optimized by maximizing the contrastive loss. The optimized transformations encourage negative sample pairs to be close and positive sample pairs to be far apart, producing harder views for the encoder to learn from. Unlike existing adversarial view generation methods, our method can optimize both photometric and geometric transformations. For remote sensing images, geometric transformations are particularly important for view generation, yet existing view optimization methods cannot handle them. We consider hue, saturation, brightness, contrast, and geometric rotation transformations in contrastive learning, and evaluate the optimized views on the downstream remote sensing image semantic segmentation task. Extensive experiments are carried out on three remote sensing image segmentation datasets: the ISPRS Potsdam dataset, the ISPRS Vaihingen dataset, and the LoveDA dataset. The results show that the learned views offer clear advantages over hand-crafted views and other optimized views. The code associated with this article has been released and can be accessed at https://github.com/AAAA-CS/AMView .
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2024.3525133
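
The abstract formulates view generation as a min-max game: the view-transformation parameters are trained to maximize the contrastive loss while the encoder is trained to minimize it. The sketch below is a minimal PyTorch illustration of that idea, not the authors' released implementation (see the linked repository for that); it covers only brightness, contrast, and rotation, and all names (AdaptiveView, nce_loss) and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of adversarial (min-max) view optimization for contrastive learning.
# View parameters are updated by gradient ASCENT on the InfoNCE loss; the encoder by descent.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveView(nn.Module):
    """Differentiable view with learnable photometric + geometric parameters."""

    def __init__(self):
        super().__init__()
        self.brightness = nn.Parameter(torch.ones(1))  # multiplicative brightness
        self.contrast = nn.Parameter(torch.ones(1))    # contrast gain around the image mean
        self.angle = nn.Parameter(torch.zeros(1))      # rotation angle in radians

    def forward(self, x):                              # x: (B, C, H, W) in [0, 1]
        x = (x * self.brightness).clamp(0, 1)
        mean = x.mean(dim=(2, 3), keepdim=True)
        x = ((x - mean) * self.contrast + mean).clamp(0, 1)
        cos, sin = torch.cos(self.angle), torch.sin(self.angle)
        zero = torch.zeros_like(cos)
        theta = torch.stack([torch.cat([cos, -sin, zero]),
                             torch.cat([sin,  cos, zero])])       # (2, 3) rotation matrix
        theta = theta.unsqueeze(0).expand(x.size(0), -1, -1)      # (B, 2, 3)
        grid = F.affine_grid(theta, x.shape, align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)        # differentiable rotation


def nce_loss(z1, z2, tau=0.1):
    """Standard InfoNCE between two batches of L2-normalized embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                         # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)


# --- one training step of the min-max game (toy encoder and data) ---
encoder = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64))
view1, view2 = AdaptiveView(), AdaptiveView()
opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-3)
opt_view = torch.optim.Adam(list(view1.parameters()) + list(view2.parameters()), lr=1e-2)

images = torch.rand(8, 3, 64, 64)                      # dummy remote-sensing batch

# (1) adversarial step: maximize the contrastive loss w.r.t. the view parameters
loss_view = -nce_loss(encoder(view1(images)), encoder(view2(images)))
opt_view.zero_grad(); loss_view.backward(); opt_view.step()

# (2) encoder step: minimize the contrastive loss on the (now harder) optimized views
loss_enc = nce_loss(encoder(view1(images)), encoder(view2(images)))
opt_enc.zero_grad(); loss_enc.backward(); opt_enc.step()
```

Because the transformations are implemented as differentiable tensor operations (including grid_sample for rotation), gradients of the contrastive loss can flow back to both photometric and geometric parameters, which is the property the abstract emphasizes over photometric-only view optimization.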