CSDA-Net: Seeking reliable correspondences by channel-spatial difference augment network



Bibliographic Details
Published in: Pattern Recognition, 2022-06, Vol. 126, Article 108539
Main Authors: Chen, Shunxing, Zheng, Linxin, Xiao, Guobao, Zhong, Zhen, Ma, Jiayi
Format: Article
Language: English
Online Access: Full text
Description
Summary:
• We propose an innovative network for feature matching.
• We introduce an attention mechanism to extract global context information.
• We exploit the Overlay Attention block to capture local and global context information.
• Experimental results show the proposed network is superior to state-of-the-art networks.

Establishing reliable correspondences is a fundamental task in computer vision, and it requires rich contextual information. In this paper, we propose a Channel-Spatial Difference Augment Network (CSDA-Net) that selectively aggregates information along the spatial and channel dimensions to seek reliable correspondences for feature matching. Specifically, we first introduce a spatial and channel attention mechanism to construct a simple yet effective block for discriminatively extracting the global context. We then design an Overlay Attention block that further exploits the spatial and channel attention mechanism with different squeeze operations, gathering more comprehensive contextual information. By integrating these two novel blocks, the proposed CSDA-Net produces feature maps with strong representative ability for feature matching. Extensive experiments on outlier rejection and relative pose estimation show that our CSDA-Net outperforms current state-of-the-art methods on both outdoor and indoor datasets.
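The abstract pairs channel attention (squeezing over the correspondence axis to reweight feature channels) with spatial attention (squeezing over channels to reweight individual correspondences). The following is a minimal PyTorch sketch of that general pattern, assuming per-correspondence features of shape (B, C, N); the module names ChannelAttention and SpatialAttention, the reduction ratio, and the mean/max squeeze choices are illustrative assumptions, not the paper's exact CSDA-Net blocks.

```python
# A minimal sketch of channel + spatial attention over per-correspondence
# features of shape (B, C, N): B batches, C channels, N putative matches.
# Illustrative only; not the authors' exact CSDA-Net implementation.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze over correspondences, then reweight each channel."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, N) -> squeeze the correspondence axis to (B, C)
        w = self.mlp(x.mean(dim=2))   # per-channel gates in (0, 1)
        return x * w.unsqueeze(2)     # broadcast gates back over N


class SpatialAttention(nn.Module):
    """Squeeze over channels, then reweight each correspondence."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(2, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Two channel-wise squeezes (mean and max), stacked as 2 channels
        avg = x.mean(dim=1, keepdim=True)      # (B, 1, N)
        mx, _ = x.max(dim=1, keepdim=True)     # (B, 1, N)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w                           # gate each correspondence


if __name__ == "__main__":
    feats = torch.randn(2, 128, 2000)          # 2000 putative correspondences
    out = SpatialAttention()(ChannelAttention(128)(feats))
    print(out.shape)                           # torch.Size([2, 128, 2000])
```

Using two complementary squeezes, one over correspondences and one over channels, lets a block of this kind suppress outlier correspondences while emphasizing informative feature channels, which is the general intuition behind the paper's channel-spatial design.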
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2022.108539