Retouchdown: Adding Touchdown to StreetLearn as a Shareable Resource for Language Grounding Tasks in Street View
Format: Article
Language: English
Abstract: The Touchdown dataset (Chen et al., 2019) provides instructions by human annotators for navigation through New York City streets and for resolving spatial descriptions at a given location. To enable the wider research community to work effectively with the Touchdown tasks, we are publicly releasing the 29k raw Street View panoramas needed for Touchdown. We follow the process used for the StreetLearn data release (Mirowski et al., 2019) to check panoramas for personally identifiable information and blur them as necessary. These have been added to the StreetLearn dataset and can be obtained via the same process as used previously for StreetLearn. We also provide a reference implementation for both of the Touchdown tasks: vision and language navigation (VLN) and spatial description resolution (SDR). We compare our model results to those given in Chen et al. (2019) and show that the panoramas we have added to StreetLearn fully support both Touchdown tasks and can be used effectively for further research and comparison.
DOI: 10.48550/arxiv.2001.03671