Retouchdown: Adding Touchdown to StreetLearn as a Shareable Resource for Language Grounding Tasks in Street View

The Touchdown dataset (Chen et al., 2019) provides instructions by human annotators for navigation through New York City streets and for resolving spatial descriptions at a given location. To enable the wider research community to work effectively with the Touchdown tasks, we are publicly releasing the 29k raw Street View panoramas needed for Touchdown. We follow the process used for the StreetLearn data release (Mirowski et al., 2019) to check panoramas for personally identifiable information and blur them as necessary. These have been added to the StreetLearn dataset and can be obtained via the same process as used previously for StreetLearn. We also provide a reference implementation for both of the Touchdown tasks: vision and language navigation (VLN) and spatial description resolution (SDR). We compare our model results to those given in Chen et al. (2019) and show that the panoramas we have added to StreetLearn fully support both Touchdown tasks and can be used effectively for further research and comparison.

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Mehta, Harsh; Artzi, Yoav; Baldridge, Jason; Ie, Eugene; Mirowski, Piotr
Format: Article
Language: English
Subjects:
Online Access: Order full text
creator Mehta, Harsh; Artzi, Yoav; Baldridge, Jason; Ie, Eugene; Mirowski, Piotr
description The Touchdown dataset (Chen et al., 2019) provides instructions by human annotators for navigation through New York City streets and for resolving spatial descriptions at a given location. To enable the wider research community to work effectively with the Touchdown tasks, we are publicly releasing the 29k raw Street View panoramas needed for Touchdown. We follow the process used for the StreetLearn data release (Mirowski et al., 2019) to check panoramas for personally identifiable information and blur them as necessary. These have been added to the StreetLearn dataset and can be obtained via the same process as used previously for StreetLearn. We also provide a reference implementation for both of the Touchdown tasks: vision and language navigation (VLN) and spatial description resolution (SDR). We compare our model results to those given in Chen et al. (2019) and show that the panoramas we have added to StreetLearn fully support both Touchdown tasks and can be used effectively for further research and comparison.
doi_str_mv 10.48550/arxiv.2001.03671
format Article
creationdate 2020-01-10
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
identifier DOI: 10.48550/arxiv.2001.03671
language eng
recordid cdi_arxiv_primary_2001_03671
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computation and Language
Computer Science - Computer Vision and Pattern Recognition
Computer Science - Learning
title Retouchdown: Adding Touchdown to StreetLearn as a Shareable Resource for Language Grounding Tasks in Street View