3D LiDAR Mapping in Dynamic Environments Using a 4D Implicit Neural Representation

Building accurate maps is a key building block to enable reliable localization, planning, and navigation of autonomous vehicles. We propose a novel approach for building accurate maps of dynamic environments utilizing a sequence of LiDAR scans. To this end, we propose encoding the 4D scene into a novel spatio-temporal implicit neural map representation by fitting a time-dependent truncated signed distance function to each point. Using our representation, we extract the static map by filtering the dynamic parts. Our neural representation is based on sparse feature grids, a globally shared decoder, and time-dependent basis functions, which we jointly optimize in an unsupervised fashion. To learn this representation from a sequence of LiDAR scans, we design a simple yet efficient loss function to supervise the map optimization in a piecewise way. We evaluate our approach on various scenes containing moving objects in terms of the reconstruction quality of static maps and the segmentation of dynamic point clouds. The experimental results demonstrate that our method is capable of removing the dynamic part of the input point clouds while reconstructing accurate and complete 3D maps, outperforming several state-of-the-art methods. Code is available at: https://github.com/PRBonn/4dNDF
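The abstract's core idea, mapping a 4D query (3D position plus time) through a sparse feature grid, time-dependent basis functions, and a globally shared decoder to a truncated signed distance, can be sketched as a toy. This is an illustrative sketch, not the authors' implementation (see https://github.com/PRBonn/4dNDF): a small dense grid stands in for their sparse grid, random weights for the learned decoder, and cosines for the temporal basis.

```python
import numpy as np

rng = np.random.default_rng(0)

GRID_RES = 8   # voxels per axis (toy; the paper uses a sparse grid)
FEAT_DIM = 4   # feature channels per voxel
N_BASIS = 3    # number of time-dependent basis functions

# Each voxel stores one feature vector per temporal basis function.
features = rng.normal(size=(GRID_RES, GRID_RES, GRID_RES, FEAT_DIM, N_BASIS))
W = rng.normal(size=FEAT_DIM) * 0.1  # stand-in for the shared decoder weights

def time_basis(t, n=N_BASIS):
    """Toy temporal basis: constant plus low-frequency cosines, t in [0, 1]."""
    k = np.arange(n)
    return np.cos(np.pi * k * t)

def query_tsdf(x, t, trunc=0.3):
    """Truncated signed distance at the 4D query (x, t), x in [0, 1]^3."""
    idx = np.clip((np.asarray(x) * GRID_RES).astype(int), 0, GRID_RES - 1)
    # Blend the voxel's per-basis features with the time weights ...
    feat_t = features[idx[0], idx[1], idx[2]] @ time_basis(t)  # (FEAT_DIM,)
    # ... then decode the time-conditioned feature to a distance value.
    sdf = float(feat_t @ W)
    return float(np.clip(sdf, -trunc, trunc))  # truncation of the SDF

d0 = query_tsdf([0.5, 0.5, 0.5], t=0.0)
d1 = query_tsdf([0.5, 0.5, 0.5], t=1.0)
```

Because the geometry enters through the time-conditioned feature, the same voxel can decode to different distances at different times, which is what lets a single map represent both static structure and moving objects.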

Detailed description

Bibliographic details
Main authors: Zhong, Xingguang; Pan, Yue; Stachniss, Cyrill; Behley, Jens
Format: Article
Language: eng
Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Robotics
Online access: Request full text
creator Zhong, Xingguang
Pan, Yue
Stachniss, Cyrill
Behley, Jens
description Building accurate maps is a key building block to enable reliable localization, planning, and navigation of autonomous vehicles. We propose a novel approach for building accurate maps of dynamic environments utilizing a sequence of LiDAR scans. To this end, we propose encoding the 4D scene into a novel spatio-temporal implicit neural map representation by fitting a time-dependent truncated signed distance function to each point. Using our representation, we extract the static map by filtering the dynamic parts. Our neural representation is based on sparse feature grids, a globally shared decoder, and time-dependent basis functions, which we jointly optimize in an unsupervised fashion. To learn this representation from a sequence of LiDAR scans, we design a simple yet efficient loss function to supervise the map optimization in a piecewise way. We evaluate our approach on various scenes containing moving objects in terms of the reconstruction quality of static maps and the segmentation of dynamic point clouds. The experimental results demonstrate that our method is capable of removing the dynamic part of the input point clouds while reconstructing accurate and complete 3D maps, outperforming several state-of-the-art methods. Codes are available at: https://github.com/PRBonn/4dNDF
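One way to read "we extract the static map by filtering the dynamic parts" is that a location whose predicted distance stays constant over time belongs to static structure, while a strongly time-varying distance indicates a moving object. The toy below illustrates that reading with a hand-built distance function and a hypothetical variance threshold; it is not the paper's actual filtering criterion.

```python
import numpy as np

def temporal_std(dist_fn, x, times):
    """Standard deviation of the predicted distance at point x over time."""
    return float(np.std([dist_fn(x, t) for t in times]))

def label_static(dist_fn, points, times, eps=0.05):
    """True for points whose distance is stable over time (hypothetical rule)."""
    return [temporal_std(dist_fn, p, times) < eps for p in points]

# Toy stand-in for a trained 4D distance field: a static wall at x = 1
# plus an object whose center sweeps from x = 0.2 to x = 0.8 over t in [0, 1].
def toy_dist(p, t):
    wall = 1.0 - p[0]
    obj = abs(p[0] - (0.2 + 0.6 * t))
    return min(wall, obj)

times = np.linspace(0.0, 1.0, 11)
points = [(0.95, 0.0, 0.0),  # near the wall: distance never changes
          (0.50, 0.0, 0.0)]  # on the object's path: distance varies with t
labels = label_static(toy_dist, points, times)
# labels -> [True, False]: the wall point is static, the swept point dynamic
```

With the static/dynamic labels in hand, the static map would be reconstructed from the stable regions only, which matches the paper's stated outputs: a clean static 3D map plus a segmentation of the dynamic points.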
doi_str_mv 10.48550/arxiv.2405.03388
format Article
creationdate 2024-05-06
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2405.03388
language eng
recordid cdi_arxiv_primary_2405_03388
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
Computer Science - Robotics
title 3D LiDAR Mapping in Dynamic Environments Using a 4D Implicit Neural Representation