Brightness Alignment Based Coarse-to-Fine Self-Supervised Visual Odometry


Bibliographic Details
Published in: IEEE Transactions on Intelligent Vehicles, 2024, pp. 1-12
Main Authors: Liang, Yiyou; Zeng, Hui; Zhang, Baoqing; Ye, Yibin
Format: Article
Language: English
Description
Summary: Recent research has indicated the tremendous potential of self-supervised monocular visual odometry for various applications, owing to its reduced reliance on extensive training data. Nevertheless, many existing self-supervised visual odometry methods suffer from limitations in estimation accuracy and robustness, particularly under environmental illumination changes. Additionally, the comprehensive exploitation of temporal information within input image sequences remains underexplored for camera ego-motion estimation. To address these challenges, we introduce a novel coarse-to-fine self-supervised visual odometry approach in this paper. Specifically, we design a brightness-aligned pose estimation network aimed at enhancing robustness against illumination changes. Moreover, we propose a bidirectional-LSTM-based pose optimization network and two motion-related loss functions that improve pose estimation accuracy by utilizing temporal information. Extensive experiments have been conducted to validate the efficacy of our proposed visual odometry method.
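The brightness alignment mentioned in the abstract is, in the broader literature, often realized as an affine brightness model that maps one frame's intensities onto another's before computing a photometric loss. The paper's exact formulation is not reproduced in this record; the sketch below is only a generic closed-form least-squares version of such a model, with a hypothetical `align_brightness` helper and toy intensity lists standing in for image frames.

```python
def align_brightness(src, tgt):
    """Estimate affine brightness parameters (gain a, bias b) mapping the
    source frame onto the target frame by closed-form least squares,
    i.e. minimizing sum((a*src + b - tgt)^2).

    Generic affine brightness model, not necessarily the exact alignment
    used in the paper. Frames are flat lists of pixel intensities.
    """
    n = len(src)
    mean_s = sum(src) / n
    mean_t = sum(tgt) / n
    var_s = sum((x - mean_s) ** 2 for x in src)
    cov = sum((x - mean_s) * (y - mean_t) for x, y in zip(src, tgt))
    a = cov / var_s if var_s > 0 else 1.0   # gain (fallback for flat frames)
    b = mean_t - a * mean_s                 # bias
    return a, b

# Toy example: same scene, global brightness offset of +0.1.
src = [0.2, 0.4, 0.6, 0.8]
tgt = [0.3, 0.5, 0.7, 0.9]
a, b = align_brightness(src, tgt)
aligned = [a * x + b for x in src]  # brightness-aligned source frame
```

In a self-supervised pipeline of this kind, the aligned frame (rather than the raw source) would feed the photometric reconstruction loss, making the supervision signal less sensitive to global illumination changes.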
ISSN: 2379-8858, 2379-8904
DOI: 10.1109/TIV.2024.3379575