Cloud-Guided Fusion With SAR-to-Optical Translation for Thick Cloud Removal


Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2024, Vol. 62, pp. 1-15
Main Authors: Xiang, Xuanyu; Tan, Yihua; Yan, Longfei
Format: Article
Language: English
Description
Abstract: Deep learning has been widely used in thick cloud removal (TCR) for optical satellite images. Since thick clouds completely block the surface, synthetic aperture radar (SAR) images have recently been used to assist in recovering the occluded information. However, this approach faces several challenges: 1) the significant domain gap between SAR and optical features can interfere with recovering occluded optical information from SAR, and 2) TCR methods need to distinguish between cloudy and non-cloudy regions; otherwise, inconsistencies may arise between the recovered regions and the remaining non-cloudy regions. To this end, we propose a new SAR-assisted TCR method based on a two-step fusion framework, which consists of a feature alignment translation (FAT) network and a cloud-guided fusion (CGF) network. First, the FAT network leverages the features common to SAR and optical images to translate SAR images into corresponding optical images, thus recovering the occluded information. Considering the gap between the translated images and real cloud-free images, the CGF network then uses the cloudy images to further refine the translated images, yielding the cloud-removed results. Within the CGF, the cloud distribution is predicted to distinguish cloudy from non-cloudy regions, and this distribution then guides the refinement of the recovered regions using the non-cloudy regions. Extensive experiments on both simulated and real datasets show that the proposed algorithm outperforms state-of-the-art methods.
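The core idea of the cloud-guided fusion step, using a predicted cloud distribution so that occluded regions take the SAR-translated content while non-cloudy regions keep the original optical pixels, can be illustrated with a minimal sketch. This is not the authors' CGF network (which is a learned refinement); it is only a conceptual masked-blend stand-in, and the function name, array shapes, and soft mask convention are all assumptions for illustration:

```python
import numpy as np

def cloud_guided_fusion(translated, cloudy, cloud_mask):
    """Conceptual sketch of cloud-guided fusion (not the paper's network).

    translated : (H, W, C) optical-like image produced from SAR
    cloudy     : (H, W, C) original optical image with thick cloud
    cloud_mask : (H, W, 1) predicted cloud probability, 1.0 = occluded
    """
    # Occluded pixels take the translated (recovered) content; non-cloudy
    # pixels keep the original optical values, so the recovered regions
    # stay consistent with the surrounding cloud-free surface.
    return cloud_mask * translated + (1.0 - cloud_mask) * cloudy
```

In the actual method, the mask additionally guides a refinement network that adjusts the recovered regions toward the statistics of the non-cloudy regions, rather than a hard per-pixel blend.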
ISSN: 0196-2892, 1558-0644 (electronic)
DOI: 10.1109/TGRS.2024.3431556