Evaluating Explanation Methods for Vision-and-Language Navigation
Format: Article
Language: English
Online access: Order full text
Abstract: The ability to navigate robots with natural language instructions in an
unknown environment is a crucial step for achieving embodied artificial
intelligence (AI). With the improving performance of deep neural models
proposed in the field of vision-and-language navigation (VLN), it is equally
interesting to know what information the models utilize for their
decision-making in the navigation tasks. To understand the inner workings of
deep neural models, various explanation methods have been developed for
promoting explainable AI (XAI). But they are mostly applied to deep neural
models for image or text classification tasks and little work has been done in
explaining deep neural models for VLN tasks. In this paper, we address these
problems by building quantitative benchmarks to evaluate explanation methods
for VLN models in terms of faithfulness. We propose a new erasure-based
evaluation pipeline to measure the step-wise textual explanation in the
sequential decision-making setting. We evaluate several explanation methods for
two representative VLN models on two popular VLN datasets and reveal valuable
findings through our experiments.
DOI: 10.48550/arxiv.2310.06654
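
The erasure-based faithfulness idea mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's pipeline: the `model_step` interface, the dummy model, and the toy attribution scores below are hypothetical stand-ins for a VLN model's single decision step and an explanation method's per-token relevance scores. The sketch simply erases the most relevant instruction tokens and measures how much the model's confidence in its originally chosen action drops; under this assumption, a more faithful explanation yields a larger drop.

```python
# Illustrative sketch (assumed interfaces, not the paper's code):
# erasure-based faithfulness check for a step-wise textual explanation.
import numpy as np

MASK = "[MASK]"

def faithfulness_drop(model_step, tokens, attributions, k=3):
    """Erase the k highest-attributed instruction tokens and return the
    decrease in probability of the originally selected action."""
    probs = model_step(tokens)                  # action distribution at this step
    chosen = int(np.argmax(probs))              # action the model actually takes
    top_k = set(np.argsort(attributions)[::-1][:k])
    erased = [MASK if i in top_k else t for i, t in enumerate(tokens)]
    erased_probs = model_step(erased)           # re-run the step on the erased input
    return probs[chosen] - erased_probs[chosen]

# Toy usage: a dummy "model" that prefers turning left when "left" is present.
def dummy_model(tokens):
    p_left = 0.9 if "left" in tokens else 0.2
    return np.array([p_left, 1.0 - p_left])     # [turn-left, turn-right]

tokens = ["walk", "past", "the", "sofa", "and", "turn", "left"]
attributions = np.array([0.0, 0.1, 0.0, 0.2, 0.0, 0.5, 0.9])
print(faithfulness_drop(dummy_model, tokens, attributions, k=2))  # 0.7
```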