SAME: Learning Generic Language-Guided Visual Navigation with State-Adaptive Mixture of Experts
Saved in:
Main author(s): | , , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | The academic field of learning instruction-guided visual navigation can be
broadly categorized into high-level category-specific search and low-level
language-guided navigation, depending on the granularity of the language
instruction: the former emphasizes the exploration process, while the latter
concentrates on following detailed textual commands. Despite the differing
focuses of these tasks, the underlying requirements of interpreting
instructions, comprehending the surroundings, and inferring action decisions
remain consistent. This paper consolidates diverse navigation tasks into a
unified and generic framework. We investigate the core difficulties of sharing
general knowledge and exploiting task-specific capabilities in learning
navigation, and propose a novel State-Adaptive Mixture of Experts (SAME) model
that enables an agent to infer decisions based on language of different
granularity and dynamic observations. Powered by SAME, we present a versatile
agent capable of addressing seven navigation tasks simultaneously that
outperforms task-specific agents or achieves highly comparable performance. |
---|---|
DOI: | 10.48550/arxiv.2412.05552 |
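
The abstract describes SAME only at a high level. As a rough illustration of what a state-adaptive mixture-of-experts layer can look like, the minimal PyTorch sketch below soft-routes token features over several expert feed-forward blocks, with mixture weights predicted from a fused "state" feature (e.g. instruction plus current observation). The layer sizes, the linear router, and the soft token-level routing are assumptions made for illustration; they are not taken from the paper's implementation (see the DOI above for the actual method).

```python
# Illustrative sketch only: the abstract does not specify SAME's routing
# mechanism, so soft routing over expert feed-forward blocks conditioned
# on a fused state feature is an assumption, not the paper's design.
import torch
import torch.nn as nn


class StateAdaptiveMoE(nn.Module):
    """Mixes expert feed-forward networks with weights predicted from a
    state representation (e.g. fused instruction and observation features)."""

    def __init__(self, dim: int, num_experts: int = 4, hidden: int = 2048):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
             for _ in range(num_experts)]
        )
        self.router = nn.Linear(dim, num_experts)  # state -> expert logits

    def forward(self, x: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # x:     (batch, seq, dim)  token features entering the layer
        # state: (batch, dim)       summary of instruction + current observation
        weights = torch.softmax(self.router(state), dim=-1)            # (batch, E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=-1)  # (batch, seq, dim, E)
        return (expert_out * weights[:, None, None, :]).sum(dim=-1)     # (batch, seq, dim)


# Usage: the same layer can weight experts differently per navigation state.
layer = StateAdaptiveMoE(dim=768)
tokens = torch.randn(2, 10, 768)
state = torch.randn(2, 768)
out = layer(tokens, state)  # (2, 10, 768)
```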