ROCKET-1: Mastering Open-World Interaction with Visual-Temporal Context Prompting
Vision-language models (VLMs) have excelled in multimodal tasks, but adapting them to embodied decision-making in open-world environments presents challenges. One critical issue is bridging the gap between discrete entities in low-level observations and the abstract concepts required for effective planning. A common solution is building hierarchical agents, where VLMs serve as high-level reasoners that break down tasks into executable sub-tasks, typically specified using language. However, language suffers from the inability to communicate detailed spatial information. We propose visual-temporal context prompting, a novel communication protocol between VLMs and policy models. This protocol leverages object segmentation from past observations to guide policy-environment interactions. Using this approach, we train ROCKET-1, a low-level policy that predicts actions based on concatenated visual observations and segmentation masks, supported by real-time object tracking from SAM-2. Our method unlocks the potential of VLMs, enabling them to tackle complex tasks that demand spatial reasoning. Experiments in Minecraft show that our approach enables agents to achieve previously unattainable tasks, with a 76% absolute improvement in open-world interaction performance. Codes and demos are now available on the project page: https://craftjarvis.github.io/ROCKET-1.
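The abstract describes the policy's input format only at a high level: each visual observation is concatenated with an object segmentation mask (tracked in real time, e.g. by SAM-2) before being fed to the low-level policy. The Python sketch below is purely an illustration of that idea under stated assumptions, not the authors' implementation; the names (`build_policy_input`, `PolicySketch`), the network shape, and the action space size are all hypothetical.

```python
# Minimal sketch of a mask-conditioned observation for a low-level policy.
# Assumption: the mask comes from an external tracker (e.g. SAM-2) and marks
# the object the high-level reasoner wants the agent to interact with.
import torch
import torch.nn as nn


def build_policy_input(frame: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Concatenate an RGB observation (3, H, W) with a binary object mask (1, H, W)."""
    return torch.cat([frame, mask], dim=0)  # -> (4, H, W)


class PolicySketch(nn.Module):
    """Toy stand-in for a policy that maps mask-augmented frames to action logits."""

    def __init__(self, num_actions: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_actions)

    def forward(self, obs_with_mask: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(obs_with_mask))  # (batch, num_actions)


# Usage: one 128x128 frame plus a mask highlighting the target object's region.
frame = torch.rand(3, 128, 128)
mask = torch.zeros(1, 128, 128)
mask[:, 40:80, 40:80] = 1.0
logits = PolicySketch()(build_policy_input(frame, mask).unsqueeze(0))
```

The point of the channel-wise concatenation is that spatial intent is communicated to the policy as pixels rather than as language, which is the gap the paper's visual-temporal context prompting protocol targets.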
Saved in:
Published in: | arXiv.org 2024-11 |
---|---|
Main authors: | Cai, Shaofei; Wang, Zihao; Lian, Kewei; Mu, Zhancun; Ma, Xiaojian; Liu, Anji; Liang, Yitao |
Format: | Article |
Language: | eng |
Subjects: | Context; Decision making; Image segmentation; Real time; Spatial data; Task complexity; Visual observation; Visual tasks |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Cai, Shaofei; Wang, Zihao; Lian, Kewei; Mu, Zhancun; Ma, Xiaojian; Liu, Anji; Liang, Yitao |
description | Vision-language models (VLMs) have excelled in multimodal tasks, but adapting them to embodied decision-making in open-world environments presents challenges. One critical issue is bridging the gap between discrete entities in low-level observations and the abstract concepts required for effective planning. A common solution is building hierarchical agents, where VLMs serve as high-level reasoners that break down tasks into executable sub-tasks, typically specified using language. However, language suffers from the inability to communicate detailed spatial information. We propose visual-temporal context prompting, a novel communication protocol between VLMs and policy models. This protocol leverages object segmentation from past observations to guide policy-environment interactions. Using this approach, we train ROCKET-1, a low-level policy that predicts actions based on concatenated visual observations and segmentation masks, supported by real-time object tracking from SAM-2. Our method unlocks the potential of VLMs, enabling them to tackle complex tasks that demand spatial reasoning. Experiments in Minecraft show that our approach enables agents to achieve previously unattainable tasks, with a \(\mathbf{76}\%\) absolute improvement in open-world interaction performance. Codes and demos are now available on the project page: https://craftjarvis.github.io/ROCKET-1. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-11 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_3120202398 |
source | Free E-Journals |
subjects | Context; Decision making; Image segmentation; Real time; Spatial data; Task complexity; Visual observation; Visual tasks |
title | ROCKET-1: Mastering Open-World Interaction with Visual-Temporal Context Prompting |