3D LiDAR Mapping in Dynamic Environments Using a 4D Implicit Neural Representation
Format: Article
Language: English
Abstract: Building accurate maps is a key building block for reliable localization, planning, and navigation of autonomous vehicles. We propose a novel approach for building accurate maps of dynamic environments from a sequence of LiDAR scans. To this end, we encode the 4D scene into a novel spatio-temporal implicit neural map representation by fitting a time-dependent truncated signed distance function (TSDF) to each point. Using this representation, we extract the static map by filtering out the dynamic parts. Our neural representation is based on sparse feature grids, a globally shared decoder, and time-dependent basis functions, which we jointly optimize in an unsupervised fashion. To learn this representation from a sequence of LiDAR scans, we design a simple yet efficient loss function that supervises the map optimization in a piecewise way. We evaluate our approach on various scenes containing moving objects in terms of the reconstruction quality of the static maps and the segmentation of dynamic point clouds. The experimental results demonstrate that our method removes the dynamic parts of the input point clouds while reconstructing accurate and complete 3D maps, outperforming several state-of-the-art methods. Code is available at: https://github.com/PRBonn/4dNDF
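
The abstract names three ingredients: sparse spatial feature grids, a globally shared decoder, and time-dependent basis functions whose mixture yields a time-varying TSDF value. The sketch below is a minimal, hypothetical PyTorch rendering of that idea, assuming Gaussian temporal basis functions and an embedding table standing in for a true sparse grid; all class names, parameter names, and sizes are illustrative assumptions, not taken from the 4dNDF codebase.

```python
# Illustrative sketch (not the authors' code): a 4D (space + time) implicit
# map. A point's feature comes from a spatial feature store, a shared MLP
# decodes it into one coefficient per temporal basis function, and the
# basis-weighted sum gives a time-varying truncated signed distance.
import torch
import torch.nn as nn

class TimeDependentTSDF(nn.Module):
    def __init__(self, num_voxels: int = 100_000, feat_dim: int = 8,
                 num_basis: int = 4, hidden: int = 64):
        super().__init__()
        # Stand-in for a sparse feature grid: an embedding table indexed by
        # (hashed) voxel ids. A real system would interpolate a hash grid
        # or octree instead of a flat lookup.
        self.grid = nn.Embedding(num_voxels, feat_dim)
        # Globally shared decoder: feature -> one coefficient per basis.
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_basis),
        )
        # Learnable centers and widths of Gaussian basis functions over
        # normalized time (an assumed choice of temporal basis).
        self.centers = nn.Parameter(torch.linspace(0.0, 1.0, num_basis))
        self.log_widths = nn.Parameter(torch.zeros(num_basis))

    def forward(self, voxel_ids: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # voxel_ids: (N,) long ids of the voxels containing the query points
        # t:         (N,) query times normalized to [0, 1]
        coeffs = self.decoder(self.grid(voxel_ids))            # (N, num_basis)
        sigma = self.log_widths.exp()
        basis = torch.exp(-((t[:, None] - self.centers) ** 2)
                          / (2 * sigma ** 2))                  # (N, num_basis)
        # Basis-weighted mixture, squashed to the truncation band [-1, 1].
        return torch.tanh((coeffs * basis).sum(dim=-1))        # (N,)

# Usage (illustrative): query 1024 points at random times.
model = TimeDependentTSDF()
ids = torch.randint(0, 100_000, (1024,))
t = torch.rand(1024)
tsdf = model(ids, t)  # (1024,) distance values in (-1, 1)
```

Under this reading, static-map extraction would amount to keeping regions whose distance values stay stable across time while dynamic points show strongly time-varying distances; the paper's piecewise loss and the exact filtering rule are specified in the linked repository rather than here.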
DOI: 10.48550/arxiv.2405.03388