LoLa-SLAM: Low-Latency LiDAR SLAM Using Continuous Scan Slicing


Detailed Description

Saved in:
Bibliographic Details
Published in: IEEE Robotics and Automation Letters, April 2021, Vol. 6, No. 2, pp. 2248-2255
Authors: Karimi, Mojtaba; Oelsch, Martin; Stengel, Oliver; Babaians, Edwin; Steinbach, Eckehard
Format: Article
Language: English
Description
Abstract: Real-time 6D pose estimation is a key component for autonomous indoor navigation of Unmanned Aerial Vehicles (UAVs). This letter presents a low-latency LiDAR SLAM framework based on LiDAR scan slicing and concurrent matching, called LoLa-SLAM. Our framework uses sliced point cloud data from a rotating LiDAR in a concurrent multi-threaded matching pipeline for 6D pose estimation with a high update rate and low latency. The LiDAR is actuated using a 2D Lissajous spinning pattern to overcome the sensor's limited FoV. We propose a two-dimensional roughness model to extract the feature points for fine matching and registration of the point cloud. In addition, the pose estimator engages a temporal motion predictor that assists in finding the feature correspondences in the map for fast convergence of the non-linear optimizer. Subsequently, an Extended Kalman Filter (EKF) is adopted for final pose fusion. The framework is evaluated in multiple experiments by comparing the accuracy, latency, and update rate of the pose estimation for trajectories flown in an indoor environment. We quantify the superior quality of the generated volumetric map in comparison to state-of-the-art frameworks. We further examine the localization precision using ground-truth pose information recorded by a total station unit.
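The final fusion stage described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a simple constant-velocity motion model (position plus velocity state), with the scan-matching pipeline supplying 3D position measurements, and the function names (`make_cv_ekf`, `ekf_step`) and noise parameters (`q`, `r`) are hypothetical choices for illustration.

```python
import numpy as np

# Illustrative EKF sketch, NOT the LoLa-SLAM implementation.
# State: [px, py, pz, vx, vy, vz]; measurement: matched 3D position.
def make_cv_ekf(dt, q=1e-3, r=1e-2):
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                     # constant-velocity transition
    H = np.hstack([np.eye(3), np.zeros((3, 3))])   # observe position only
    Q = q * np.eye(6)                              # process noise (assumed)
    R = r * np.eye(3)                              # measurement noise (assumed)
    return F, H, Q, R

def ekf_step(x, P, z, F, H, Q, R):
    # Predict: propagate state and covariance through the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: fuse the position measurement z from scan matching.
    y = z - H @ x                                  # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ y
    P = (np.eye(6) - K @ H) @ P
    return x, P
```

Fed with a stream of matched poses, the filter smooths the high-rate, low-latency estimates from the concurrent matching threads; in the paper's pipeline the prediction step also plays the role of the temporal motion predictor that seeds correspondence search.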
ISSN: 2377-3766
DOI: 10.1109/LRA.2021.3060721