RREx-BoT: Remote Referring Expressions with a Bag of Tricks
Household robots operate in the same space for years. Such robots incrementally build dynamic maps that can be used for tasks requiring remote object localization. However, benchmarks in robot learning often test generalization through inference on tasks in unobserved environments. In an observed environment, locating an object is reduced to choosing from among all object proposals in the environment, which may number in the 100,000s. Armed with this intuition, using only a generic vision-language scoring model with minor modifications for 3d encoding and operating in an embodied environment, we demonstrate an absolute performance gain of 9.84% on remote object grounding above state-of-the-art models for REVERIE and of 5.04% on FAO. When allowed to pre-explore an environment, we also exceed the previous state-of-the-art pre-exploration method on REVERIE. Additionally, we demonstrate our model on a real-world TurtleBot platform, highlighting the simplicity and usefulness of the approach. Our analysis outlines a "bag of tricks" essential for accomplishing this task, from utilizing 3d coordinates and context, to generalizing vision-language models to large 3d search spaces.
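The abstract's core idea — in an already-mapped environment, grounding a referring expression reduces to scoring every stored object proposal against the expression and returning the best match with its 3d location — can be sketched as follows. This is a hypothetical toy illustration, not the paper's model: the embeddings, proposal ids, and coordinates are stand-ins, and the real system uses a learned vision-language scorer rather than cosine similarity over toy vectors.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def ground(expression_embedding, proposals):
    """proposals: {proposal_id: (embedding, xyz)}; returns (best_id, xyz).

    Ranks every proposal in the map against the query embedding and
    returns the top-scoring one together with its stored 3d coordinate.
    """
    best_id = max(
        proposals,
        key=lambda pid: cosine(expression_embedding, proposals[pid][0]),
    )
    return best_id, proposals[best_id][1]

# Toy map with two proposals; the query embedding matches the mug best.
proposals = {
    "mug_kitchen": ([0.9, 0.1, 0.0], (1.2, 0.4, 0.8)),
    "lamp_bedroom": ([0.1, 0.9, 0.2], (5.0, 2.1, 1.1)),
}
query = [1.0, 0.0, 0.0]  # stand-in embedding for "the mug in the kitchen"
print(ground(query, proposals))  # best match is "mug_kitchen"
```

Note the exhaustive ranking: with proposals numbering in the 100,000s, the practical challenge the paper addresses is making such a vision-language scorer scale to that search space.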
Saved in:
Main Authors: | Sigurdsson, Gunnar A; Thomason, Jesse; Sukhatme, Gaurav S; Piramuthu, Robinson |
---|---|
Format: | Article |
Language: | English |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Robotics |
Online Access: | Order full text |
creator | Sigurdsson, Gunnar A; Thomason, Jesse; Sukhatme, Gaurav S; Piramuthu, Robinson |
description | Household robots operate in the same space for years. Such robots
incrementally build dynamic maps that can be used for tasks requiring remote
object localization. However, benchmarks in robot learning often test
generalization through inference on tasks in unobserved environments. In an
observed environment, locating an object is reduced to choosing from among all
object proposals in the environment, which may number in the 100,000s. Armed
with this intuition, using only a generic vision-language scoring model with
minor modifications for 3d encoding and operating in an embodied environment,
we demonstrate an absolute performance gain of 9.84% on remote object grounding
above state-of-the-art models for REVERIE and of 5.04% on FAO. When allowed to
pre-explore an environment, we also exceed the previous state-of-the-art
pre-exploration method on REVERIE. Additionally, we demonstrate our model on a
real-world TurtleBot platform, highlighting the simplicity and usefulness of
the approach. Our analysis outlines a "bag of tricks" essential for
accomplishing this task, from utilizing 3d coordinates and context, to
generalizing vision-language models to large 3d search spaces. |
doi_str_mv | 10.48550/arxiv.2301.12614 |
format | Article |
creationdate | 2023-01-29 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2301.12614 |
language | eng |
recordid | cdi_arxiv_primary_2301_12614 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Robotics |
title | RREx-BoT: Remote Referring Expressions with a Bag of Tricks |