On the Evaluation of Vision-and-Language Navigation Instructions
Format: Article
Language: English
Abstract: Vision-and-Language Navigation wayfinding agents can be enhanced by exploiting automatically generated navigation instructions. However, existing instruction generators have not been comprehensively evaluated, and the automatic evaluation metrics used to develop them have not been validated. Using human wayfinders, we show that these generators perform on par with or only slightly better than a template-based generator and far worse than human instructors. Furthermore, we discover that BLEU, ROUGE, METEOR, and CIDEr are ineffective for evaluating grounded navigation instructions. To improve instruction evaluation, we propose an instruction-trajectory compatibility model that operates without reference instructions. Our model shows the highest correlation with human wayfinding outcomes when scoring individual instructions. For ranking instruction generation systems, we recommend SPICE when reference instructions are available.
DOI: 10.48550/arxiv.2101.10504
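
The abstract reports that n-gram overlap metrics such as BLEU are poor judges of grounded navigation instructions. As a minimal illustrative sketch (not the paper's evaluation code), the Python snippet below shows how such a reference-based metric is typically computed: a generated instruction is scored against a human reference with NLTK's sentence-level BLEU. Both instruction strings are invented for illustration.

```python
# Minimal sketch: scoring a generated navigation instruction against a human
# reference with sentence-level BLEU (NLTK). Both instructions are invented
# examples; this is not the paper's own evaluation pipeline.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "walk past the couch and stop at the open door".split()
candidate = "go past the sofa then stop by the open door".split()

# Smoothing keeps the score nonzero when higher-order n-grams do not overlap.
score = sentence_bleu(
    [reference],
    candidate,
    smoothing_function=SmoothingFunction().method1,
)
print(f"BLEU: {score:.3f}")
```

Because a metric like this rewards surface overlap with reference text rather than fidelity to the trajectory, two instructions that guide a wayfinder equally well can receive very different scores, which is consistent with the abstract's finding that such metrics fail for grounded instructions.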