Talk2Nav: Long-Range Vision-and-Language Navigation with Dual Attention and Spatial Memory
The role of robots in society keeps expanding, bringing with it the necessity of interacting and communicating with humans. In order to keep such interaction intuitive, we provide automatic wayfinding based on verbal navigational instructions. Our first contribution is the creation of a large-scale dataset with verbal navigation instructions. To this end, we have developed an interactive visual navigation environment based on Google Street View; we further design an annotation method to highlight mined anchor landmarks and the local directions between them, in order to help annotators formulate typical, human references to those. The annotation task was crowdsourced on the AMT platform to construct the new Talk2Nav dataset with 10,714 routes. Our second contribution is a new learning method. Inspired by spatial cognition research on the mental conceptualization of navigational instructions, we introduce a soft dual attention mechanism defined over the segmented language instructions to jointly extract two partial instructions -- one for matching the next upcoming visual landmark and the other for matching the local directions to the next landmark. Along similar lines, we also introduce a spatial memory scheme to encode the local directional transitions. Our work takes advantage of advances in two lines of research: the mental formalization of verbal navigational instructions and the training of neural network agents for automatic wayfinding. Extensive experiments show that our method significantly outperforms previous navigation methods. For the demo video, dataset and code, please refer to our project page: https://www.trace.ethz.ch/publications/2019/talk2nav/index.html
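The abstract describes a soft dual attention mechanism that splits a segmented instruction into two weighted summaries: one intended to match the next visual landmark and one to match the local directions leading to it. The sketch below is a minimal, hypothetical PyTorch illustration of that general idea, not the authors' implementation; the class and parameter names (SoftDualAttention, instr_dim, state_dim) are assumptions made for this example.

```python
# Hypothetical sketch of a soft dual attention over instruction segments.
# Two independent softmax heads weight the same segments, yielding a
# "landmark" summary and a "directions" summary of the instruction.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftDualAttention(nn.Module):
    def __init__(self, instr_dim: int, state_dim: int):
        super().__init__()
        # Separate scoring heads for the two partial instructions.
        self.landmark_score = nn.Linear(instr_dim + state_dim, 1)
        self.direction_score = nn.Linear(instr_dim + state_dim, 1)

    def forward(self, segments: torch.Tensor, state: torch.Tensor):
        # segments: (B, S, instr_dim) embeddings of instruction segments
        # state:    (B, state_dim) current agent state
        B, S, _ = segments.shape
        state_exp = state.unsqueeze(1).expand(B, S, -1)
        joint = torch.cat([segments, state_exp], dim=-1)          # (B, S, instr_dim + state_dim)
        w_lm = F.softmax(self.landmark_score(joint).squeeze(-1), dim=-1)    # (B, S)
        w_dir = F.softmax(self.direction_score(joint).squeeze(-1), dim=-1)  # (B, S)
        landmark_instr = torch.bmm(w_lm.unsqueeze(1), segments).squeeze(1)   # (B, instr_dim)
        direction_instr = torch.bmm(w_dir.unsqueeze(1), segments).squeeze(1) # (B, instr_dim)
        return landmark_instr, direction_instr, (w_lm, w_dir)


if __name__ == "__main__":
    # Toy usage with random tensors: 2 routes, 6 instruction segments each.
    attn = SoftDualAttention(instr_dim=256, state_dim=128)
    segs = torch.randn(2, 6, 256)
    state = torch.randn(2, 128)
    lm, dr, _ = attn(segs, state)
    print(lm.shape, dr.shape)  # torch.Size([2, 256]) torch.Size([2, 256])
```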
Saved in:
Published in: | arXiv.org 2020-10 |
---|---|
Main authors: | Arun Balajee Vasudevan ; Dai, Dengxin ; Luc Van Gool |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Arun Balajee Vasudevan ; Dai, Dengxin ; Luc Van Gool |
description | The role of robots in society keeps expanding, bringing with it the necessity of interacting and communicating with humans. In order to keep such interaction intuitive, we provide automatic wayfinding based on verbal navigational instructions. Our first contribution is the creation of a large-scale dataset with verbal navigation instructions. To this end, we have developed an interactive visual navigation environment based on Google Street View; we further design an annotation method to highlight mined anchor landmarks and the local directions between them, in order to help annotators formulate typical, human references to those. The annotation task was crowdsourced on the AMT platform to construct the new Talk2Nav dataset with 10,714 routes. Our second contribution is a new learning method. Inspired by spatial cognition research on the mental conceptualization of navigational instructions, we introduce a soft dual attention mechanism defined over the segmented language instructions to jointly extract two partial instructions -- one for matching the next upcoming visual landmark and the other for matching the local directions to the next landmark. Along similar lines, we also introduce a spatial memory scheme to encode the local directional transitions. Our work takes advantage of advances in two lines of research: the mental formalization of verbal navigational instructions and the training of neural network agents for automatic wayfinding. Extensive experiments show that our method significantly outperforms previous navigation methods. For the demo video, dataset and code, please refer to our project page: https://www.trace.ethz.ch/publications/2019/talk2nav/index.html |
doi_str_mv | 10.48550/arxiv.1910.02029 |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2020-10 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_1910_02029 |
source | arXiv.org ; Free E-Journals |
subjects | Anchors ; Annotations ; Autonomous navigation ; Cognition ; Computer Science - Computation and Language ; Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Robotics ; Datasets ; Landmarks ; Matching ; Neural networks ; Wayfinding |
title | Talk2Nav: Long-Range Vision-and-Language Navigation with Dual Attention and Spatial Memory |