IQ-VIO: adaptive visual inertial odometry via interference quantization under dynamic environments
Published in: Intelligent Service Robotics 2023-11, Vol. 16 (5), p. 565-581
Authors: , , , ,
Format: Article
Language: English
Online access: Full text
Abstract: Vision-based localization is susceptible to interference from dynamic objects in the environment, resulting in decreased localization accuracy and even tracking loss. Hence, sensor fusion with IMUs or motor encoders has been widely adopted to improve positioning accuracy and robustness in dynamic environments. However, commonly used loosely coupled fusion methods cannot completely eliminate the error introduced by dynamic objects. In this paper, we propose a novel adaptive visual-inertial odometry via interference quantization, named IQ-VIO. To quantify the confidence of pose estimation through analysis of vision frames, we first introduce the feature coverage and a dynamic-scene interference index based on image information entropy. Based on this interference index, we then establish the IQ-VIO multi-sensor fusion model, which adaptively adjusts the measurement error covariance matrix of an extended Kalman filter (EKF) to suppress and eliminate the impact of dynamic objects on localization. We verify the IQ-VIO algorithm on the KAIST Urban dataset and in real-world scenes. Results show that our method performs favorably against other algorithms; in particular, under challenging scenes such as low texture, the relative pose error (RPE) of our algorithm decreases by at least 20%. Our approach effectively eliminates the impact of dynamic objects in the scene and achieves higher positioning accuracy and robustness than conventional methods.
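The abstract names two computational pieces: an interference index derived from image information entropy, and an EKF whose measurement error covariance is inflated according to that index. The Python sketch below illustrates this general idea only; the paper's exact definitions of feature coverage, the interference index, and the covariance update are not given in the abstract, so every name and formula here (`image_entropy`, `interference_index`, `adapt_measurement_cov`, `ALPHA`) is an illustrative assumption, not the authors' implementation.

```python
# Illustrative sketch of entropy-based interference quantization and
# adaptive EKF measurement covariance, loosely following the IQ-VIO
# abstract. All names and constants are hypothetical placeholders.

import numpy as np


def image_entropy(gray: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (bits) of a grayscale image's intensity histogram."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                              # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())


def interference_index(prev_gray: np.ndarray, curr_gray: np.ndarray) -> float:
    """Hypothetical dynamic-scene interference index in [0, 1].

    Idea: a large entropy change between consecutive frames suggests
    moving objects disturbing the scene; a stable entropy suggests a
    mostly static view. The normalization by 8 bits (max entropy for
    256 bins) is an assumption, not the paper's formula.
    """
    delta = abs(image_entropy(curr_gray) - image_entropy(prev_gray))
    return min(delta / 8.0, 1.0)


ALPHA = 10.0  # assumed inflation gain; the paper's tuning is not reproduced


def adapt_measurement_cov(R0: np.ndarray, iq: float) -> np.ndarray:
    """Inflate the EKF measurement covariance as interference grows,
    down-weighting visual measurements taken in dynamic scenes."""
    return R0 * (1.0 + ALPHA * iq)
```

In a standard EKF update, inflating R shrinks the Kalman gain applied to the visual residuals, so pose corrections rely more on the IMU prediction when the scene is judged dynamic; this is the adaptive suppression behavior the abstract describes.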
ISSN: 1861-2776, 1861-2784
DOI: 10.1007/s11370-023-00478-2