An Energy-Efficient Processor for Real-Time Semantic LiDAR SLAM in Mobile Robots

Bibliographic Details
Published in: IEEE Journal of Solid-State Circuits, 2024-09, pp. 1-13
Authors: Jung, Jueun; Kim, Seungbin; Seo, Bokyoung; Jang, Wuyoung; Lee, Sangho; Shin, Jeongmin; Han, Donghyeon; Lee, Kyuho Jason
Format: Article
Language: English

Abstract: Emerging mobile robots require simultaneous localization and mapping (SLAM) systems with advanced 3-D perception and long-range 360° interaction for autonomous driving. However, previous SLAM processors, which targeted only camera-based visual SLAM, are unsuitable for autonomous driving systems due to their limited field of view (FoV), inaccurate depth estimation, and lack of perception. In contrast, LiDAR offers a long-range 3-D point cloud with precise depth information and a 360° FoV, enabling the capture of fine environmental details. With its high accuracy and environmental robustness, LiDAR SLAM with accurate 3-D perception, i.e., semantic LiDAR SLAM, emerges as the most promising solution for autonomous driving systems. Nevertheless, a real-time system-on-chip (SoC) implementation of semantic LiDAR SLAM has not been reported, primarily due to the memory-intensive and compute-intensive operations caused by the simultaneous execution of multiple algorithms. Moreover, real-time performance has not been feasible even on high-performance CPU + GPU platforms. In this article, a real-time and fully integrated semantic LiDAR SLAM processor (LSPU) is presented with a semantic LiDAR-PNN-SLAM (LP-SLAM) system, which provides point neural network (PNN)-based 3-D segmentation, localization, and mapping simultaneously. The LSPU executes LP-SLAM with the following features: 1) a k-nearest neighbor (kNN) cluster with 2-D/3-D spherical-coordinate-based bin (SB) searching to eliminate external memory access through dynamic memory allocation; 2) a PNN engine (PNNE) with a global point-level task scheduler (GPTS) to maximize core utilization through two-step workload balancing; 3) a keypoint extraction core (KEC) to skip redundant computation in the sorting operation; and 4) an optimization cluster with reconfigurable computation modes to support keypoint-level pipelining and parallel processing in non-linear optimization (NLO). As a result, the proposed LSPU achieves a processing time of 20.7 ms, demonstrating real-time semantic LP-SLAM while consuming 99.89% less energy than modern CPU + GPU platforms.
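
The spherical-coordinate bin (SB) search named in feature 1) lends itself to a brief software illustration. Below is a minimal Python sketch, assuming points are binned by azimuth/elevation at a fixed angular resolution and neighbor candidates are drawn only from adjacent bins; the bin resolutions, function names, and search-ring size are illustrative assumptions and do not reflect the LSPU's hardware datapath or its dynamic memory allocation scheme.

```python
# Minimal sketch of spherical-coordinate bin (SB) based neighbor search.
# Illustrative only; angular resolutions and ring size are assumed values.
import math
from collections import defaultdict

def spherical_bin(point, az_res_deg=2.0, el_res_deg=2.0):
    """Map a 3-D point to an (azimuth, elevation) bin index.
    No azimuth wrap-around handling, for brevity."""
    x, y, z = point
    az = math.degrees(math.atan2(y, x))                  # azimuth angle
    el = math.degrees(math.atan2(z, math.hypot(x, y)))   # elevation angle
    return (int(az // az_res_deg), int(el // el_res_deg))

def build_bins(points, az_res_deg=2.0, el_res_deg=2.0):
    """Group point indices by spherical bin (built once per LiDAR scan)."""
    bins = defaultdict(list)
    for i, p in enumerate(points):
        bins[spherical_bin(p, az_res_deg, el_res_deg)].append(i)
    return bins

def knn_in_bins(query, points, bins, k=5, az_res_deg=2.0, el_res_deg=2.0, ring=1):
    """Return indices of up to k nearest points, checking only nearby bins
    instead of scanning the whole point cloud."""
    qa, qe = spherical_bin(query, az_res_deg, el_res_deg)
    candidates = []
    for da in range(-ring, ring + 1):
        for de in range(-ring, ring + 1):
            candidates.extend(bins.get((qa + da, qe + de), ()))
    candidates.sort(key=lambda i: sum((a - b) ** 2 for a, b in zip(points[i], query)))
    return candidates[:k]

# Usage: build bins once per scan, then answer each kNN query locally.
points = [(10.0, 0.1 * i, 0.5) for i in range(100)]
bins = build_bins(points)
print(knn_in_bins((10.0, 0.5, 0.5), points, bins, k=3))
```

In this sketch the bins are built once per scan and each query touches only a few neighboring bins; this locality is the software analogue of the external-memory-access reduction the abstract attributes to SB searching.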
ISSN: 0018-9200, 1558-173X
DOI: 10.1109/JSSC.2024.3450314