3D LiDAR Mapping in Dynamic Environments Using a 4D Implicit Neural Representation
Building accurate maps is a key building block to enable reliable localization, planning, and navigation of autonomous vehicles. We propose a novel approach for building accurate maps of dynamic environments utilizing a sequence of LiDAR scans. To this end, we propose encoding the 4D scene into a novel spatio-temporal implicit neural map representation by fitting a time-dependent truncated signed distance function to each point. Using our representation, we extract the static map by filtering the dynamic parts. Our neural representation is based on sparse feature grids, a globally shared decoder, and time-dependent basis functions, which we jointly optimize in an unsupervised fashion. To learn this representation from a sequence of LiDAR scans, we design a simple yet efficient loss function to supervise the map optimization in a piecewise way. We evaluate our approach on various scenes containing moving objects in terms of the reconstruction quality of static maps and the segmentation of dynamic point clouds. The experimental results demonstrate that our method is capable of removing the dynamic part of the input point clouds while reconstructing accurate and complete 3D maps, outperforming several state-of-the-art methods. Codes are available at: https://github.com/PRBonn/4dNDF
Published in: | arXiv.org, 2024-05 |
---|---|
Main authors: | Zhong, Xingguang; Pan, Yue; Stachniss, Cyrill; Behley, Jens |
Format: | Article |
Language: | English |
Subjects: | Autonomous navigation; Basis functions; Image reconstruction; Image segmentation; Lidar; Representations; Three dimensional models; Time dependence |
Online access: | Full text |
container_title | arXiv.org |
creator | Zhong, Xingguang; Pan, Yue; Stachniss, Cyrill; Behley, Jens |
description | Building accurate maps is a key building block to enable reliable localization, planning, and navigation of autonomous vehicles. We propose a novel approach for building accurate maps of dynamic environments utilizing a sequence of LiDAR scans. To this end, we propose encoding the 4D scene into a novel spatio-temporal implicit neural map representation by fitting a time-dependent truncated signed distance function to each point. Using our representation, we extract the static map by filtering the dynamic parts. Our neural representation is based on sparse feature grids, a globally shared decoder, and time-dependent basis functions, which we jointly optimize in an unsupervised fashion. To learn this representation from a sequence of LiDAR scans, we design a simple yet efficient loss function to supervise the map optimization in a piecewise way. We evaluate our approach on various scenes containing moving objects in terms of the reconstruction quality of static maps and the segmentation of dynamic point clouds. The experimental results demonstrate that our method is capable of removing the dynamic part of the input point clouds while reconstructing accurate and complete 3D maps, outperforming several state-of-the-art methods. Codes are available at: https://github.com/PRBonn/4dNDF |
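The description above factors a 4D scene into a time-dependent truncated signed distance function built from sparse feature grids, a shared decoder, and time-dependent basis functions. The following is a minimal, illustrative sketch (not the authors' code) of the core factorization idea, f(x, t) = Σ_k b_k(t) · g_k(x): the choice of a cosine temporal basis and the toy linear stand-ins for the feature grid and decoder are assumptions for demonstration only.

```python
# Illustrative sketch of a time-factored implicit field, assuming
# f(x, t) = sum_k b_k(t) * g_k(x). The basis choice, the linear
# spatial components, and all names here are hypothetical stand-ins
# for the paper's sparse feature grids and shared decoder.
import numpy as np

K = 4  # number of time-dependent basis functions (assumed)

def time_basis(t, K=K):
    """Fourier-style temporal basis b_k(t) = cos(pi * k * t), t in [0, 1].
    The k=0 term is constant in time and models the static scene."""
    ks = np.arange(K)
    return np.cos(np.pi * ks * t)  # shape (K,)

def spatial_components(x, weights):
    """Toy spatial fields g_k(x) = w_k . x, standing in for a sparse
    feature grid queried at x and decoded by a shared MLP."""
    return weights @ x  # weights: (K, 3) -> shape (K,)

def sdf_4d(x, t, weights):
    """Time-dependent signed distance f(x, t) = sum_k b_k(t) * g_k(x)."""
    return float(time_basis(t) @ spatial_components(x, weights))

def static_sdf(x, weights):
    """Static map query: keep only the time-invariant k=0 component,
    analogous to filtering out the dynamic parts of the scene."""
    return float(weights[0] @ x)

rng = np.random.default_rng(0)
W = rng.normal(size=(K, 3))     # hypothetical learned parameters
x = np.array([1.0, 2.0, 0.5])   # a 3D query point

f_now = sdf_4d(x, 0.0, W)       # full 4D field at t = 0
f_static = static_sdf(x, W)     # time-invariant part only
```

In the actual method the static map is extracted from the learned representation rather than from a single basis term, and the grids and decoder are optimized jointly against the LiDAR scans; this sketch only shows how a temporal basis lets one field answer both time-dependent and static queries.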
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-05 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_3051697039 |
source | Free E-Journals |
subjects | Autonomous navigation; Basis functions; Image reconstruction; Image segmentation; Lidar; Representations; Three dimensional models; Time dependence |
title | 3D LiDAR Mapping in Dynamic Environments Using a 4D Implicit Neural Representation |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-27T07%3A33%3A47IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=3D%20LiDAR%20Mapping%20in%20Dynamic%20Environments%20Using%20a%204D%20Implicit%20Neural%20Representation&rft.jtitle=arXiv.org&rft.au=Zhong,%20Xingguang&rft.date=2024-05-06&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E3051697039%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3051697039&rft_id=info:pmid/&rfr_iscdi=true |