LSV-ANet: Deep Learning on Local Structure Visualization for Feature Matching

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2022, Vol. 60, pp. 1-18
Authors: Chen, Jiaxuan; Chen, Shuang; Chen, Xiaoxian; Yang, Yang; Xing, Linjie; Fan, Xiaoyan; Rao, Yujing
Format: Article
Language: English
Abstract: Feature matching is a fundamental and important task in many applications of remote sensing and photogrammetry. Remote sensing images often involve complex spatial relationships due to ground relief variations and imaging viewpoint changes, so a pre-defined geometric model is likely to yield inferior matching accuracy. To find good correspondences, we propose a simple yet efficient deep learning network, which we term the "local structure visualization-attention" network (LSV-ANet). Our main aim is to transform outlier detection into a dynamic visual similarity evaluation. Specifically, we first map the local spatial distribution into a regular grid, forming the LSV descriptor, and then customize a spatial SCale Attention (SCA) module and a spatial STructure Attention (STA) module, which explicitly allow structure manipulation and scale selection of the LSV within the network. Finally, by training the LSV-ANet end-to-end, the embedded SCA and STA modules deduce the optimal LSV for the feature matching task. To demonstrate the robustness and universality of our LSV-ANet, we conduct extensive experiments on various real image pairs for general feature matching and compare against eight state-of-the-art methods. The results demonstrate the superiority of our method over the state of the art.
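
The abstract describes the pipeline at a high level; the following is a minimal, hypothetical PyTorch sketch of the underlying idea: rasterize each keypoint's K-nearest-neighbor layout into a regular grid (a "visualized" local structure) and score it with a small attention-gated CNN. All names here (local_structure_grid, AttentionScorer, k, grid_size) and the single sigmoid gate are illustrative assumptions, not the paper's actual LSV or SCA/STA design; a real matcher would also build the descriptor per putative correspondence across both images.

```python
# Hypothetical sketch of the local-structure-visualization idea, not the
# authors' implementation: map each point's neighbor layout to a grid image,
# then score it with an attention-gated CNN.
import torch
import torch.nn as nn

def local_structure_grid(points, k=8, grid_size=16):
    """Rasterize each point's K-nearest-neighbor offsets into a G x G grid.

    points: (N, 2) tensor of normalized keypoint coordinates.
    Returns: (N, 1, G, G) occupancy grids of the local structures.
    """
    n = points.shape[0]
    d = torch.cdist(points, points)                    # (N, N) pairwise distances
    knn = d.topk(k + 1, largest=False).indices[:, 1:]  # K nearest, excluding self
    rel = points[knn] - points[:, None, :]             # (N, K, 2) neighbor offsets
    rel = rel / (rel.abs().amax(dim=(1, 2), keepdim=True) + 1e-8)  # scale to [-1, 1]
    idx = ((rel + 1) / 2 * (grid_size - 1)).round().long().clamp(0, grid_size - 1)
    flat = idx[..., 1] * grid_size + idx[..., 0]       # row-major cell index
    grid = torch.zeros(n, 1, grid_size, grid_size)
    grid.view(n, -1).scatter_(1, flat, 1.0)            # mark occupied cells
    return grid

class AttentionScorer(nn.Module):
    """Tiny CNN with a sigmoid spatial gate standing in for the attention idea."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, 3, padding=1)
        self.gate = nn.Conv2d(8, 1, 1)                 # spatial attention map
        self.head = nn.Linear(8, 1)

    def forward(self, g):
        f = torch.relu(self.conv(g))
        f = f * torch.sigmoid(self.gate(f))            # re-weight structure cells
        return torch.sigmoid(self.head(f.mean(dim=(2, 3))))  # inlier probability

pts = torch.rand(100, 2)                               # normalized keypoints
scores = AttentionScorer()(local_structure_grid(pts))  # (100, 1) inlier scores
```

In this toy setup, training end-to-end with a binary inlier/outlier loss would let the gate learn which cells of the grid matter, which is the spirit (though not the mechanics) of the embedded SCA/STA modules described above.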
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2021.3062498