NaVid: Video-based VLM Plans the Next Step for Vision-and-Language Navigation

Bibliographic details
Authors: Zhang, Jiazhao; Wang, Kunyu; Xu, Rongtao; Zhou, Gengze; Hong, Yicong; Fang, Xiaomeng; Wu, Qi; Zhang, Zhizheng; Wang, He
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Robotics
DOI: 10.48550/arxiv.2402.15852
Source: arXiv.org
Online access: https://arxiv.org/abs/2402.15852
Description: Vision-and-language navigation (VLN) stands as a key research problem of Embodied AI, aiming at enabling agents to navigate in unseen environments following linguistic instructions. In this field, generalization is a long-standing challenge, either to out-of-distribution scenes or from Sim to Real. In this paper, we propose NaVid, a video-based large vision language model (VLM), to mitigate such a generalization gap. NaVid makes the first endeavor to showcase the capability of VLMs to achieve state-of-the-art level navigation performance without any maps, odometers, or depth inputs. Following human instruction, NaVid only requires an on-the-fly video stream from a monocular RGB camera equipped on the robot to output the next-step action. Our formulation mimics how humans navigate and naturally gets rid of the problems introduced by odometer noises, and the Sim2Real gaps from map or depth inputs. Moreover, our video-based approach can effectively encode the historical observations of robots as spatio-temporal contexts for decision making and instruction following. We train NaVid with 510k navigation samples collected from continuous environments, including action-planning and instruction-reasoning samples, along with 763k large-scale web data. Extensive experiments show that NaVid achieves state-of-the-art performance in simulation environments and the real world, demonstrating superior cross-dataset and Sim2Real transfer. We thus believe our proposed VLM approach plans the next step for not only the navigation agents but also this research field.
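The description above amounts to a simple control loop: at each step the agent appends the latest monocular RGB frame to its running video history and asks the video-based VLM for the next action, with no maps, odometry, or depth involved. The Python sketch below illustrates that loop only; the `query_vlm` stub, the `DummyRobot` interface, and the discrete action set are hypothetical placeholders for illustration, not NaVid's actual API or action space.

```python
# Minimal sketch of the next-step-planning loop described in the abstract.
# `query_vlm`, `DummyRobot`, and ACTIONS are illustrative stand-ins, not
# NaVid's real interfaces: only RGB frames and the instruction go in, and
# one low-level action comes out per step (no maps, odometry, or depth).
from typing import List

import numpy as np

ACTIONS = ("FORWARD", "TURN_LEFT", "TURN_RIGHT", "STOP")  # assumed action set


def query_vlm(frames: List[np.ndarray], instruction: str) -> str:
    """Placeholder for the video-based VLM; returns the next action as text."""
    # A real model would encode the frame history as spatio-temporal tokens
    # and decode an action conditioned on the instruction. This stub stops.
    return "STOP"


class DummyRobot:
    """Stand-in for the robot's monocular camera and actuation interface."""

    def capture(self) -> np.ndarray:
        return np.zeros((480, 640, 3), dtype=np.uint8)  # blank 640x480 RGB frame

    def execute(self, action: str) -> None:
        pass  # a real robot base would move or turn here


def navigate(instruction: str, robot: DummyRobot, max_steps: int = 200) -> List[str]:
    """Roll out one episode: keep the whole RGB history, plan one step at a time."""
    history: List[np.ndarray] = []  # on-the-fly video stream is the only observation
    executed: List[str] = []
    for _ in range(max_steps):
        history.append(robot.capture())           # latest monocular RGB frame
        action = query_vlm(history, instruction)  # VLM plans only the next step
        if action not in ACTIONS:
            action = "STOP"                       # guard against malformed output
        executed.append(action)
        if action == "STOP":
            break
        robot.execute(action)
    return executed


if __name__ == "__main__":
    print(navigate("Walk past the sofa and stop at the kitchen door.", DummyRobot()))
```

The one design point the sketch tries to capture is that the full frame history, rather than a map or a fixed-size window, serves as the spatio-temporal context the description refers to; everything about training (the 510k navigation samples and 763k web samples) is outside its scope.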