COLCONF: Collaborative ConvNet Features-based Robust Visual Place Recognition for Varying Environments


Bibliographic Details
Published in: Arabian Journal for Science and Engineering (2011), 2022-02, Vol. 47 (2), pp. 2381-2395
Main authors: Abdul Hafez, A. H., Tello, Ammar, Alqaraleh, Saed
Format: Article
Language: English
Online access: Full text
Description
Abstract: Several deep learning features have recently been proposed for visual place recognition (VPR). Some of them use the information contained in image sequences, while others utilize the regions of interest (ROIs) that reside in the feature maps produced by CNN models. It has been shown in the literature that features produced from a single layer cannot meet multiple visual challenges. In this work, we present a new collaborative VPR approach that takes advantage of ROI feature maps gathered and combined from two different layers in order to improve recognition performance. An extensive analysis is made of how ROIs are extracted and how performance can differ from one layer to another. Our approach was evaluated over several benchmark datasets, including those with viewpoint and appearance challenges. The results confirm the robustness of the proposed method compared to state-of-the-art methods: the area under the curve (AUC) and mean average precision (mAP) measures achieve an average of 91%, in comparison with 86% for Max Flow and 72% for CAMAL.
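The core idea sketched in the abstract — extracting ROI-based descriptors from two different CNN layers and combining them into a single collaborative representation — can be illustrated as follows. This is a minimal NumPy sketch, not the authors' implementation: the ROI selection rule (top-k most active spatial locations), the pooling scheme, and the concatenation-based fusion are all illustrative assumptions, since the paper's exact procedure is not given in this record.

```python
import numpy as np

def roi_descriptor(fmap, k=4):
    """Build one descriptor from a conv feature map of shape (C, H, W).

    Assumption (illustrative): ROIs are the k spatial locations with the
    highest total activation energy; their channel vectors are averaged
    into a single L2-normalised descriptor.
    """
    C, H, W = fmap.shape
    energy = fmap.sum(axis=0).ravel()        # per-location activation energy
    top = np.argsort(energy)[-k:]            # indices of the k most active locations (ROIs)
    vecs = fmap.reshape(C, -1)[:, top]       # (C, k) channel vectors at those ROIs
    d = vecs.mean(axis=1)                    # pool ROI vectors into one descriptor
    return d / (np.linalg.norm(d) + 1e-12)

def collaborative_descriptor(fmap_a, fmap_b):
    """Combine ROI descriptors from two layers by concatenation (assumed fusion)."""
    return np.concatenate([roi_descriptor(fmap_a), roi_descriptor(fmap_b)])

def cosine(u, v):
    """Cosine similarity, used here as the place-matching score."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for feature maps from two layers of a pretrained CNN
    # (e.g. an early conv layer with 64 channels and a deeper one with 128).
    query_a, query_b = rng.random((64, 14, 14)), rng.random((128, 7, 7))
    ref_a, ref_b = rng.random((64, 14, 14)), rng.random((128, 7, 7))
    dq = collaborative_descriptor(query_a, query_b)
    dr = collaborative_descriptor(ref_a, ref_b)
    print(dq.shape, cosine(dq, dr))
```

In a real pipeline, `fmap_a` and `fmap_b` would come from forward hooks on two chosen layers of a pretrained network; matching a query against a map of reference places then reduces to a nearest-neighbour search over these concatenated descriptors.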
ISSN:2193-567X
1319-8025
2191-4281
DOI:10.1007/s13369-021-06148-8