Fast Dual-Feature Extraction Based on Tightly Coupled Lightweight Network for Visual Place Recognition


Full Description

Bibliographic Details
Published in: IEEE Access, 2023-01, Vol. 11, p. 1-1
Authors: Hu, Xiaofei; Zhou, Yang; Lyu, Liang; Lan, Chaozhen; Shi, Qunshan; Hou, Mingbo
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Visual place recognition (VPR) aims to predict the location of a query image based on existing images. Because image data can often be massive, extracting features efficiently is critical. To address model redundancy and the poor time efficiency of feature extraction, this study proposes a fast dual-feature extraction method based on a tightly coupled lightweight network. The tightly coupled network extracts local and global features in a unified model built on a lightweight backbone. Learned step size quantization is then applied to reduce computational overhead at inference. Additionally, an efficient channel attention module preserves feature representation ability. Efficiency and performance experiments on different hardware platforms showed that the proposed algorithm yields significant runtime savings in feature extraction, with inference 2.9-4.0 times faster than the general model. The experimental results confirmed that the proposed method significantly improves VPR efficiency while maintaining accuracy.
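The abstract names three building blocks: a shared lightweight backbone that produces local and global features, learned step size quantization (LSQ) for cheaper inference, and an efficient channel attention (ECA) module. The sketch below (PyTorch) shows how such standard components are commonly wired together; the class names, the placement of a single quantizer on the output feature map (LSQ normally quantizes weights and activations throughout the network), and mean pooling for the global descriptor are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn

class ECA(nn.Module):
    # Efficient channel attention: global average pooling, a 1-D convolution
    # across the channel descriptor, and a sigmoid gate.
    def __init__(self, channels: int, k_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.gate = nn.Sigmoid()

    def forward(self, x):  # x: (B, C, H, W)
        y = self.pool(x).squeeze(-1).transpose(-1, -2)               # (B, 1, C)
        y = self.gate(self.conv(y)).transpose(-1, -2).unsqueeze(-1)  # (B, C, 1, 1)
        return x * y

class LSQQuantizer(nn.Module):
    # Learned step size quantization: the step size is a trainable parameter;
    # rounding uses a straight-through estimator so gradients still flow.
    def __init__(self, bits: int = 8, init_step: float = 0.1):
        super().__init__()
        self.qn, self.qp = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
        self.step = nn.Parameter(torch.tensor(init_step))

    def forward(self, x):
        g = 1.0 / (x.numel() * self.qp) ** 0.5                    # LSQ gradient scale
        s = self.step * g + (self.step - self.step * g).detach()
        q = torch.clamp(x / s, self.qn, self.qp)
        q = (q.round() - q).detach() + q                          # straight-through round
        return q * s

class DualFeatureHead(nn.Module):
    # One shared backbone feeds both a local feature map (for matching) and an
    # L2-normalised global descriptor (for retrieval); names are hypothetical.
    def __init__(self, backbone: nn.Module, channels: int):
        super().__init__()
        self.backbone, self.attn, self.quant = backbone, ECA(channels), LSQQuantizer()

    def forward(self, x):
        fmap = self.attn(self.quant(self.backbone(x)))                   # local features (B, C, H, W)
        desc = nn.functional.normalize(fmap.mean(dim=(-2, -1)), dim=-1)  # global descriptor (B, C)
        return fmap, desc

Sharing one backbone for both feature types is what makes the extraction "tightly coupled": the expensive convolutional trunk runs once per image, and only the cheap attention, quantization, and pooling steps differ between the two outputs.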
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3331371