Vision-Dialog Navigation by Exploring Cross-modal Memory
Saved in:
Main authors: | |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Summary: | Vision-dialog navigation, posed as a new holy-grail task in the vision-language field, aims at learning an agent that can continually converse for help in natural language and navigate according to human responses. Beyond the common challenges of vision-language navigation, vision-dialog navigation also requires handling the language intentions of a series of questions about the temporal context of the dialog history, and co-reasoning over both dialogs and visual scenes. In this paper, we propose the Cross-modal Memory Network (CMN) for remembering and understanding the rich information relevant to historical navigation actions. CMN consists of two memory modules: the language memory module (L-mem) and the visual memory module (V-mem). Specifically, L-mem learns latent relationships between the current language interaction and the dialog history by employing a multi-head attention mechanism, while V-mem learns to associate the current visual views with the cross-modal memory of previous navigation actions. The cross-modal memory is generated via vision-to-language attention and language-to-vision attention. Benefiting from the collaborative learning of L-mem and V-mem, CMN is able to exploit the memory of the decision making behind historical navigation actions, which benefits the current step. Experiments on the CVDN dataset show that CMN outperforms the previous state-of-the-art model by a significant margin in both seen and unseen environments. |
DOI: | 10.48550/arxiv.2003.06745 |
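The summary above names two attention-based memory modules (L-mem and V-mem) and two cross-modal attention directions. The following is a minimal PyTorch sketch of how such modules could be wired together, based only on that description; the class names, tensor shapes, hidden sizes, and the final view-scoring step are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of attention-based language/visual memory modules,
# assuming generic tensor shapes and a simple dot-product action scorer.
import torch
import torch.nn as nn


class LanguageMemory(nn.Module):
    """L-mem sketch: relate the current exchange to the dialog history
    with multi-head attention."""

    def __init__(self, hidden_dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)

    def forward(self, current_lang: torch.Tensor, history: torch.Tensor) -> torch.Tensor:
        # current_lang: (batch, 1, hidden)      encoding of the current question/answer
        # history:      (batch, turns, hidden)  encodings of past dialog turns
        attended, _ = self.attn(query=current_lang, key=history, value=history)
        return attended  # (batch, 1, hidden)


class VisualMemory(nn.Module):
    """V-mem sketch: associate current views with a cross-modal memory built
    from vision-to-language and language-to-vision attention."""

    def __init__(self, hidden_dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.vis2lang = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.lang2vis = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)

    def forward(self, views: torch.Tensor, lang_memory: torch.Tensor,
                past_views: torch.Tensor) -> torch.Tensor:
        # views:       (batch, n_views, hidden)  current visual observations
        # lang_memory: (batch, 1, hidden)        output of the L-mem module
        # past_views:  (batch, steps, hidden)    visual features from earlier steps
        # vision-to-language attention: ground the language memory in past visual steps
        lang_ctx, _ = self.vis2lang(query=lang_memory, key=past_views, value=past_views)
        # language-to-vision attention: relate current views to that cross-modal memory
        fused, _ = self.lang2vis(query=views, key=lang_ctx, value=lang_ctx)
        return fused  # (batch, n_views, hidden)


# Toy usage with random features (hypothetical dimensions).
B, H = 2, 512
lmem, vmem = LanguageMemory(H), VisualMemory(H)
lang_mem = lmem(torch.randn(B, 1, H), torch.randn(B, 5, H))
fused = vmem(torch.randn(B, 36, H), lang_mem, torch.randn(B, 7, H))
logits = (fused * torch.randn(B, 36, H)).sum(-1)  # one simple way to score candidate views
```

This sketch only illustrates the data flow implied by the summary; the actual CMN presumably adds encoders, learned fusion layers, and a navigation policy head on top of these attention operations.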