Dehaze-TGGAN: Transformer-Guide Generative Adversarial Networks With Spatial-Spectrum Attention for Unpaired Remote Sensing Dehazing
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2024, Vol. 62, pp. 1-20
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Satellite imagery plays a critical role in target detection. However, the quality and usability of optical remote sensing images can be severely compromised by atmospheric conditions, particularly haze, which significantly reduces the recognition accuracy of target detection algorithms, e.g., for ships. Moreover, paired training data, i.e., remote sensing images of the same scene with and without haze, are difficult to obtain in real-world scenarios, which causes many existing dehazing methods to fail. To address these issues, this article proposes Dehaze-TGGAN, a Transformer-guided, CycleGAN-based generative adversarial network framework that incorporates an additional attention mechanism drawn from the frequency domain. First, a spatial-spectrum attention (SSA) mechanism is proposed that applies a 2-D fast Fourier transform (2-D FFT) to the spatial domain, enabling the model to learn relationships within the three-channel frequency-domain representation and to recover the spectral features of the hazy image through a spectrum encoder block. Then, a pre-training approach using semi-transparent masks (STMs), which effectively simulate hazy conditions by adjusting mask transparency, is presented as a key strategy to accelerate convergence. Finally, the applicability of the transformer architecture is extended by incorporating a total variation loss (TV loss). Results on simulated and measured optical remote sensing data show that the proposed algorithm greatly improves both recognition accuracy and efficiency.
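To make the spatial-spectrum idea in the summary concrete, the following is a minimal PyTorch sketch of a frequency-domain attention block built around a 2-D FFT. The module structure, layer sizes, and residual connection are illustrative assumptions, not the paper's actual SSA or spectrum encoder block.

```python
import torch
import torch.nn as nn


class FrequencyAttention(nn.Module):
    """Illustrative spatial-spectrum attention: re-weights features by a gate
    computed from their 2-D FFT magnitude spectrum (not the paper's exact SSA)."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions map the log-magnitude spectrum to per-frequency weights in (0, 1).
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # 2-D FFT over the spatial dimensions of each channel.
        spectrum = torch.fft.fft2(x, norm="ortho")
        log_mag = torch.log1p(spectrum.abs())        # stabilised magnitude spectrum
        attention = self.gate(log_mag)               # per-channel, per-frequency gate
        # Re-weight the spectrum and return to the spatial domain.
        filtered = torch.fft.ifft2(spectrum * attention, norm="ortho").real
        return x + filtered                          # residual connection


if __name__ == "__main__":
    feats = torch.randn(2, 3, 64, 64)                # batch of three-channel feature maps
    print(FrequencyAttention(3)(feats).shape)        # torch.Size([2, 3, 64, 64])
```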
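Similarly, the semi-transparent mask (STM) pre-training strategy can be pictured as alpha-blending a bright mask over clear imagery. The function below is a toy haze simulator under that reading; the blending rule, parameter names, and smoothing are assumptions for illustration, not the paper's procedure.

```python
import torch
import torch.nn.functional as F


def apply_semi_transparent_mask(clear: torch.Tensor,
                                mean_transparency: float = 0.6,
                                haze_color: float = 1.0) -> torch.Tensor:
    """Blend a bright mask over a clear (N, C, H, W) image in [0, 1] using a
    smoothly varying per-pixel transparency, giving non-uniform synthetic haze."""
    n, _, h, w = clear.shape
    # Low-resolution random transparency field, upsampled for smooth spatial variation.
    coarse = torch.rand(n, 1, max(h // 16, 1), max(w // 16, 1), device=clear.device)
    alpha = F.interpolate(coarse, size=(h, w), mode="bilinear", align_corners=False)
    alpha = (mean_transparency + 0.3 * (alpha - 0.5)).clamp(0.0, 1.0)
    mask = torch.full_like(clear, haze_color)        # uniform bright "haze" layer
    return alpha * clear + (1.0 - alpha) * mask      # lower alpha -> denser haze
```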
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2024.3435470
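For reference, the total variation (TV) loss named in the summary is a standard smoothness penalty on neighboring pixels; a common anisotropic formulation is sketched below. The reduction and weighting here are assumptions, not necessarily those used in Dehaze-TGGAN.

```python
import torch


def total_variation_loss(img: torch.Tensor) -> torch.Tensor:
    """Anisotropic total variation: mean absolute difference between
    horizontally and vertically adjacent pixels of an (N, C, H, W) image."""
    dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean()
    dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()
    return dh + dw
```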