A Robust Descriptor Based on Modality-Independent Neighborhood Information for Optical-SAR Image Matching

Bibliographic Details
Published in: IEEE Geoscience and Remote Sensing Letters, 2022, Vol. 19, pp. 1-5
Authors: Yu, Qiuze; Zhao, Wensen; Jiang, Yuxuan; Wang, Ruikai; Xiao, Jinsheng
Format: Article
Language: English
Description
Abstract: Due to intensity differences and speckle noise, automatic optical-synthetic aperture radar (SAR) image matching remains a challenging task. This letter addresses the problem by proposing a novel descriptor (MaskMIND) with three different modes based on modality-independent neighborhood information. The descriptor samples and exploits relative structural information to improve accuracy and precision. In addition, gradient maps are calculated separately in a preprocessing step to suppress noise. A corresponding metric, which accounts for positional uncertainty that increases with distance, is then defined using the sum of squared differences (SSD) accelerated by the fast Fourier transform (FFT). Our methods are effective because they rely on relative and abstract structural information. Experimental results on five optical-SAR image pairs show that the methods perform well and have clear potential. Compared with the state-of-the-art channel features of orientated gradients (CFOG) method, the accuracy of our sMaskMIND-grids variant is improved by 12% on average.
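
The abstract states that the similarity metric is an SSD accelerated by the FFT. As a rough, self-contained illustration of that general idea only (a NumPy sketch, not the authors' implementation: the function name ssd_map_fft and the toy example are ours, and the MaskMIND descriptor and the distance-weighted positional uncertainty are omitted), the following computes an SSD surface between a template and an image via the convolution theorem:

```python
import numpy as np

def ssd_map_fft(image, template):
    """SSD between `template` and every same-sized window of `image`,
    accelerated with the FFT.  Smaller values indicate better matches.

    Generic sketch only: it ignores the MaskMIND descriptor and the
    distance-weighted positional uncertainty described in the letter.
    """
    image = np.asarray(image, dtype=np.float64)
    template = np.asarray(template, dtype=np.float64)
    H, W = image.shape
    h, w = template.shape

    # SSD(u, v) = sum_window(f^2) - 2 * corr(f, t)(u, v) + sum(t^2).
    # The cross-correlation term uses the convolution theorem:
    # corr = IFFT( FFT(f) * conj(FFT(t)) ), zero-padded to avoid wrap-around.
    fft_shape = (H + h - 1, W + w - 1)
    F = np.fft.rfft2(image, fft_shape)
    T = np.fft.rfft2(template, fft_shape)
    corr = np.fft.irfft2(F * np.conj(T), fft_shape)[:H - h + 1, :W - w + 1]

    # Sliding-window sum of f^2 via an integral image (summed-area table).
    ii = np.pad(image ** 2, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    win_sq = ii[h:, w:] - ii[:-h, w:] - ii[h:, :-w] + ii[:-h, :-w]

    return win_sq - 2.0 * corr + np.sum(template ** 2)

# Example: the best offset is where the SSD surface is smallest.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.standard_normal((256, 256))
    tpl = img[100:132, 60:92].copy()
    ssd = ssd_map_fft(img, tpl)
    print(np.unravel_index(np.argmin(ssd), ssd.shape))  # -> (100, 60)
```

The FFT reduces the cost of evaluating the correlation term over all offsets from O(HWhw) to O(HW log HW), which is what makes exhaustive SSD search over large search windows practical.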
ISSN: 1545-598X, 1558-0571
DOI: 10.1109/LGRS.2022.3211271