PEANUT: Predicting and Navigating to Unseen Targets

Efficient ObjectGoal navigation (ObjectNav) in novel environments requires an understanding of the spatial and semantic regularities in environment layouts. In this work, we present a straightforward method for learning these regularities by predicting the locations of unobserved objects from incomplete semantic maps. Our method differs from previous prediction-based navigation methods, such as frontier potential prediction or egocentric map completion, by directly predicting unseen targets while leveraging the global context from all previously explored areas. Our prediction model is lightweight and can be trained in a supervised manner using a relatively small amount of passively collected data. Once trained, the model can be incorporated into a modular pipeline for ObjectNav without the need for any reinforcement learning. We validate the effectiveness of our method on the HM3D and MP3D ObjectNav datasets. We find that it achieves the state-of-the-art on both datasets, despite not using any additional data for training.
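To make the idea concrete, here is a minimal toy sketch of the kind of map-based target scoring the abstract describes. This is not the paper's actual model (PEANUT uses a learned prediction network); the function, the co-occurrence prior, and the -1 convention for unexplored cells are all illustrative assumptions.

```python
import numpy as np

def predict_target_scores(semantic_map, cooccurrence):
    """Score unexplored cells of an incomplete top-down semantic map.

    semantic_map: HxW int array; -1 marks unexplored cells, other values
        are observed semantic class ids.
    cooccurrence: dict mapping an observed class id to a prior that the
        target object appears near that class (a hand-made stand-in for
        a learned predictor).
    Returns an HxW float array; explored cells score 0.
    """
    h, w = semantic_map.shape
    scores = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            if semantic_map[i, j] != -1:
                continue  # only unseen cells are candidate target locations
            # Gather observed classes in a 3x3 neighborhood around the cell.
            window = semantic_map[max(0, i - 1):i + 2, max(0, j - 1):j + 2]
            observed = window[window >= 0]
            if observed.size:
                scores[i, j] = float(np.mean(
                    [cooccurrence.get(int(c), 0.0) for c in observed]))
    return scores
```

In a modular pipeline of the kind the abstract mentions, the cell with the highest score would be handed to a path planner as the next navigation goal, e.g. `goal = np.unravel_index(np.argmax(scores), scores.shape)`.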


Bibliographic Details
Main Authors: Zhai, Albert J; Wang, Shenlong
Format: Article
Language: English
Published: 2022-12-05
Online Access: Order full text
DOI: 10.48550/arXiv.2212.02497
Source: arXiv.org
Subjects:
Computer Science - Artificial Intelligence
Computer Science - Computer Vision and Pattern Recognition
Computer Science - Robotics