3D-VLA: A 3D Vision-Language-Action Generative World Model

Recent vision-language-action (VLA) models rely on 2D inputs, lacking integration with the broader realm of the 3D physical world. Furthermore, they perform action prediction by learning a direct mapping from perception to action, neglecting the vast dynamics of the world and the relations between actions and dynamics. In contrast, human beings are endowed with world models that depict imagination about future scenarios to plan actions accordingly. To this end, we propose 3D-VLA by introducing a new family of embodied foundation models that seamlessly link 3D perception, reasoning, and action through a generative world model. Specifically, 3D-VLA is built on top of a 3D-based large language model (LLM), and a set of interaction tokens is introduced to engage with the embodied environment. Furthermore, to inject generation abilities into the model, we train a series of embodied diffusion models and align them into the LLM for predicting the goal images and point clouds. To train our 3D-VLA, we curate a large-scale 3D embodied instruction dataset by extracting vast 3D-related information from existing robotics datasets. Our experiments on held-in datasets demonstrate that 3D-VLA significantly improves the reasoning, multimodal generation, and planning capabilities in embodied environments, showcasing its potential in real-world applications.
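
The abstract describes a pipeline in which a 3D-based LLM emits interaction tokens and hands off goal-image and goal-point-cloud generation to aligned diffusion models before an action is predicted. Below is a minimal, self-contained Python sketch of that control flow; it is not the authors' implementation, and every name in it (Observation3D, ThreeDLLMBackbone, GoalDiffusionDecoder, predict_action, the <scene>/<image>/<pcd>/<action> tokens) is a hypothetical placeholder used only to illustrate how the pieces might fit together.

# Conceptual sketch (not the authors' code) of the 3D-VLA pipeline described in the
# abstract: a 3D-based LLM backbone consumes a 3D observation plus an instruction,
# emits special "interaction tokens", and delegates goal-image / goal-point-cloud
# generation to aligned diffusion decoders before predicting an action.
# All class and function names here are hypothetical placeholders.

from dataclasses import dataclass
from typing import List


@dataclass
class Observation3D:
    rgb: List[float]          # flattened RGB features (placeholder)
    point_cloud: List[float]  # flattened point-cloud features (placeholder)


class ThreeDLLMBackbone:
    """Stand-in for the 3D-based LLM; returns interaction tokens and a latent."""

    def plan(self, obs: Observation3D, instruction: str):
        # A real model would tokenize the instruction, fuse it with 3D features,
        # and decode a sequence containing interaction tokens; here we fake that.
        latent = [0.0] * 8
        tokens = ["<scene>", "<image>", "<pcd>", "<action>"]
        return tokens, latent


class GoalDiffusionDecoder:
    """Stand-in for an embodied diffusion model aligned into the LLM."""

    def __init__(self, modality: str):
        self.modality = modality

    def generate(self, latent):
        # A real decoder would run iterative denoising conditioned on the latent.
        return f"generated goal {self.modality} from latent of size {len(latent)}"


def predict_action(obs: Observation3D, instruction: str) -> str:
    backbone = ThreeDLLMBackbone()
    image_decoder = GoalDiffusionDecoder("image")
    pcd_decoder = GoalDiffusionDecoder("point cloud")

    tokens, latent = backbone.plan(obs, instruction)
    goals = {}
    if "<image>" in tokens:
        goals["image"] = image_decoder.generate(latent)
    if "<pcd>" in tokens:
        goals["point_cloud"] = pcd_decoder.generate(latent)

    # The final action would be decoded conditioned on the observation and the
    # imagined goals (the "world model" step); a string stands in for it here.
    return f"action conditioned on {len(goals)} imagined goal(s)"


if __name__ == "__main__":
    obs = Observation3D(rgb=[0.0] * 16, point_cloud=[0.0] * 16)
    print(predict_action(obs, "pick up the red mug"))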

Bibliographic Details
Main Authors: Zhen, Haoyu; Qiu, Xiaowen; Chen, Peihao; Yang, Jincheng; Yan, Xin; Du, Yilun; Hong, Yining; Gan, Chuang
Format: Article
Language: English
Published: 2024-03-14
DOI: 10.48550/arxiv.2403.09631
Source: arXiv.org
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Robotics
Online Access: https://arxiv.org/abs/2403.09631