Visual Semantic Navigation with Real Robots
Visual Semantic Navigation (VSN) is the ability of a robot to learn visual semantic information for navigating in unseen environments. These VSN models are typically tested in the same virtual environments in which they are trained, mainly using reinforcement-learning-based approaches. Therefore, we do not yet have an in-depth analysis of how these models would behave in the real world. In this work, we propose a new solution to integrate VSN models into real robots, so that we have true embodied agents. We also release a novel ROS-based framework for VSN, ROS4VSN, so that any VSN model can be easily deployed on any ROS-compatible robot and tested in a real setting. Our experiments with two different robots, in which we have embedded two state-of-the-art VSN agents, confirm that there is a noticeable performance difference between these VSN solutions when tested in real-world and simulation environments. We hope that this research will provide a foundation for addressing this consequential issue, with the ultimate aim of advancing the performance and efficiency of embodied agents in authentic real-world scenarios. Code to reproduce all our experiments can be found at https://github.com/gramuah/ros4vsn.
Saved in:
Published in: | arXiv.org 2023-11 |
---|---|
Main authors: | Gutiérrez-Álvarez, Carlos; Ríos-Navarro, Pablo; Flor-Rodríguez, Rafael; Acevedo-Rodríguez, Francisco Javier; López-Sastre, Roberto J |
Format: | Article |
Language: | eng |
Subjects: | Environment models; Navigation; Robots; Semantics; Virtual environments |
Online access: | Full text |
container_title | arXiv.org |
creator | Gutiérrez-Álvarez, Carlos; Ríos-Navarro, Pablo; Flor-Rodríguez, Rafael; Acevedo-Rodríguez, Francisco Javier; López-Sastre, Roberto J |
description | Visual Semantic Navigation (VSN) is the ability of a robot to learn visual semantic information for navigating in unseen environments. These VSN models are typically tested in the same virtual environments in which they are trained, mainly using reinforcement-learning-based approaches. Therefore, we do not yet have an in-depth analysis of how these models would behave in the real world. In this work, we propose a new solution to integrate VSN models into real robots, so that we have true embodied agents. We also release a novel ROS-based framework for VSN, ROS4VSN, so that any VSN model can be easily deployed on any ROS-compatible robot and tested in a real setting. Our experiments with two different robots, in which we have embedded two state-of-the-art VSN agents, confirm that there is a noticeable performance difference between these VSN solutions when tested in real-world and simulation environments. We hope that this research will provide a foundation for addressing this consequential issue, with the ultimate aim of advancing the performance and efficiency of embodied agents in authentic real-world scenarios. Code to reproduce all our experiments can be found at https://github.com/gramuah/ros4vsn. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-11 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2895041161 |
source | Free E-Journals |
subjects | Environment models; Navigation; Robots; Semantics; Virtual environments |
title | Visual Semantic Navigation with Real Robots |