BYE: Build Your Encoder with One Sequence of Exploration Data for Long-Term Dynamic Scene Understanding

Dynamic scene understanding remains a persistent challenge in robotic applications. Early dynamic mapping methods focused on mitigating the negative influence of short-term dynamic objects on camera motion estimation by masking or tracking specific categories, which often fall short in adapting to long-term scene changes. Recent efforts address object association in long-term dynamic environments using neural networks trained on synthetic datasets, but they still rely on predefined object shapes and categories. Other methods incorporate visual, geometric, or semantic heuristics for the association but often lack robustness. In this work, we introduce BYE, a class-agnostic, per-scene point cloud encoder that removes the need for predefined categories, shape priors, or extensive association datasets. Trained on only a single sequence of exploration data, BYE can efficiently perform object association in dynamically changing scenes. We further propose an ensembling scheme combining the semantic strengths of Vision Language Models (VLMs) with the scene-specific expertise of BYE, achieving a 7% improvement and a 95% success rate in object association tasks. Code and dataset are available at https://byencoder.github.io.
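
The abstract describes the ensembling of VLM semantics with the scene-specific BYE encoder only at a high level. As a rough illustration of what such a score-fusion scheme for object association could look like, the sketch below combines two cosine-similarity matrices (one from VLM features, one from scene-specific encoder features) with a weighted sum and solves the matching with the Hungarian algorithm. The fusion weight, feature dimensions, and matching step are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (not the authors' code): fusing VLM similarity with a
# scene-specific encoder's similarity for object association.
# `alpha` and the Hungarian matching step are assumptions for illustration.

import numpy as np
from scipy.optimize import linear_sum_assignment


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between rows of a (N, D) and b (M, D)."""
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-8)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-8)
    return a @ b.T


def associate_objects(vlm_prev, vlm_curr, bye_prev, bye_curr, alpha=0.5):
    """Match objects from a previous scan to a current scan.

    vlm_*: per-object VLM features, shape (N, D_vlm) / (M, D_vlm).
    bye_*: per-object scene-specific features, shape (N, D_bye) / (M, D_bye).
    Returns a list of (prev_idx, curr_idx) matches.
    """
    # Convex combination of the two similarity sources.
    sim = alpha * cosine_similarity(vlm_prev, vlm_curr) + \
        (1.0 - alpha) * cosine_similarity(bye_prev, bye_curr)
    # Hungarian algorithm maximizes total similarity (minimize negated scores).
    rows, cols = linear_sum_assignment(-sim)
    return list(zip(rows.tolist(), cols.tolist()))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    matches = associate_objects(
        vlm_prev=rng.normal(size=(5, 512)), vlm_curr=rng.normal(size=(6, 512)),
        bye_prev=rng.normal(size=(5, 256)), bye_curr=rng.normal(size=(6, 256)),
    )
    print(matches)
```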


Bibliographic Details
Main Authors: Huang, Chenguang; Yan, Shengchao; Burgard, Wolfram
Format: Article
Language: English
Published: 2024-12-03
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning; Computer Science - Robotics
DOI: 10.48550/arxiv.2412.02449
Source: arXiv.org
Online Access: https://arxiv.org/abs/2412.02449