Improved monocular visual-inertial odometry with point and line features using adaptive line feature extraction

Bibliographic Details
Published in: Multimedia Tools and Applications, 2024, Vol. 83 (1), p. 1481-1512
Main authors: Feng, Liying; Zhang, Xiaoli; Peng, Xiafu; Zhuang, Mingxi
Format: Article
Language: English
Abstract: With the development of the intelligent era, UAV applications are becoming increasingly widespread, and positioning and navigation technology is key. Among the available approaches, the monocular VIO algorithm, with its advantages of light weight and low cost, is a current research focus. However, VIO based on point features alone is prone to localization failure in weakly textured environments, because it cannot extract enough corner features. In this paper, we propose a visual-inertial odometry based on point and line features: a tightly coupled monocular system optimized in real time. To improve positioning accuracy in low-light environments, we propose an image preprocessing algorithm based on an adaptive strategy that increases the number of detected features. To reduce the difficulty of line feature matching, a group-then-merge method is used to merge wrongly segmented long line segments, which also reduces the fragmentation of line features. In addition, during initialization, a least-squares problem is solved to estimate the accelerometer bias and improve localization accuracy. Experiments on the EuRoC benchmark dataset show that, under the same operating environment and experimental parameter settings, our algorithm achieves higher localization accuracy and better robustness than several mainstream visual-inertial odometry algorithms that use only point features. Moreover, compared with PLF-VINS, IPL-VINS, and PL-VINS, the localization error of our algorithm is reduced by about 0.6%, 18.9%, and 9.7%, respectively.
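The abstract mentions that the accelerometer bias is estimated during initialization by solving a least-squares problem. A minimal sketch of that idea is shown below; the residual model, the function name `estimate_accel_bias`, and the dimensions are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch: estimating a constant accelerometer bias b by
# linear least squares, in the spirit of the initialization step the
# abstract describes. J and r are assumed inputs (stacked Jacobians of
# the preintegration residuals w.r.t. the bias, and the residuals
# evaluated at zero bias); the real system would build them from IMU
# preintegration terms.
import numpy as np

def estimate_accel_bias(J, r):
    """Solve min_b || J @ b - r ||^2 for a 3-vector bias b.

    J : (3N, 3) stacked Jacobians with respect to the accelerometer bias.
    r : (3N,)   stacked residuals evaluated at zero bias.
    """
    b, *_ = np.linalg.lstsq(J, r, rcond=None)
    return b

# Toy usage: recover a known bias from exact synthetic measurements.
rng = np.random.default_rng(0)
true_bias = np.array([0.05, -0.02, 0.1])
J = rng.standard_normal((30, 3))
r = J @ true_bias
print(np.allclose(estimate_accel_bias(J, r), true_bias))  # True
```

With noise-free synthetic data the solver recovers the bias exactly; in practice the residuals are noisy and the estimate is a best fit rather than an exact recovery.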
ISSN: 1380-7501, 1573-7721
DOI: 10.1007/s11042-023-15597-2