Segmentation-guided Context Learning using EO Object Labels for Stable SAR-to-EO Translation
| Published in: | IEEE Geoscience and Remote Sensing Letters, 2024-01, Vol. 21, p. 1-1 |
|---|---|
| Main authors: | , , , |
| Format: | Article |
| Language: | English |
| Keywords: | |
| Online access: | Order full text |
| Abstract: | Recently, the analysis and use of Synthetic Aperture Radar (SAR) imagery have become crucial for surveillance, military operations, and environmental monitoring. A common challenge with SAR images is the presence of speckle noise, which can hinder their interpretability. To enhance the clarity of SAR images, this paper introduces a novel SAR-to-Electro-Optical (EO) image translation (SET) network, called SGCL-SET, which is the first to incorporate EO object label information for stable translation. A pre-trained segmentation network supplies segmentation regions and their labels to the SET learning process, so that SGCL-SET can effectively learn the translation for regions with confusing contexts. In comprehensive experiments on our KOMPSAT dataset, SGCL-SET outperforms all previous methods by large margins across nine image quality evaluation metrics. |
| ISSN: | 1545-598X, 1558-0571 |
| DOI: | 10.1109/LGRS.2023.3344804 |
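
The abstract describes conditioning a SAR-to-EO translation network on segmentation regions and labels produced by a pre-trained segmenter. Below is a minimal sketch of that general idea in PyTorch, assuming the labels are simply one-hot encoded and concatenated with the SAR input; all class names and layer choices are hypothetical and do not reproduce the authors' SGCL-SET architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SegGuidedTranslator(nn.Module):
    """Toy SAR-to-EO translator conditioned on segmentation labels.

    The segmentation map (integer class ids from a frozen, pre-trained
    segmenter) is one-hot encoded and concatenated with the SAR input,
    so the generator can learn region-specific translation behaviour.
    """

    def __init__(self, num_classes: int = 8):
        super().__init__()
        in_ch = 1 + num_classes  # single SAR channel + one-hot label planes
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1),  # 3-channel EO-like output
        )
        self.num_classes = num_classes

    def forward(self, sar: torch.Tensor, seg: torch.Tensor) -> torch.Tensor:
        # seg: (B, H, W) integer class ids
        seg_onehot = F.one_hot(seg, self.num_classes).permute(0, 3, 1, 2).float()
        x = torch.cat([sar, seg_onehot], dim=1)
        return self.net(x)


if __name__ == "__main__":
    model = SegGuidedTranslator(num_classes=8)
    sar = torch.randn(2, 1, 64, 64)          # speckled SAR patch (placeholder)
    seg = torch.randint(0, 8, (2, 64, 64))   # labels from a pre-trained segmenter (placeholder)
    eo_pred = model(sar, seg)
    eo_target = torch.rand(2, 3, 64, 64)     # paired EO image (placeholder)
    loss = F.l1_loss(eo_pred, eo_target)     # simple reconstruction objective
    print(eo_pred.shape, loss.item())
```

The sketch only illustrates label-conditioned translation; the paper's actual contribution (segmentation-guided context learning and its training losses) is detailed in the full text referenced by the DOI above.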