IVLMap: Instance-Aware Visual Language Grounding for Consumer Robot Navigation
Format: Article
Language: English
Online access: Order full text
Summary: Vision-and-Language Navigation (VLN) is a challenging task that requires a robot to navigate photo-realistic environments following natural language prompts from humans. Recent studies handle this task by constructing a semantic spatial map representation of the environment and then leveraging the strong reasoning ability of large language models to generate code that guides the robot's navigation. However, these methods face limitations in instance-level and attribute-level navigation tasks because they cannot distinguish between different instances of the same object. To address this challenge, we propose a new method, the Instance-aware Visual Language Map (IVLMap), which empowers the robot with instance-level and attribute-level semantic mapping. The map is constructed autonomously by fusing RGBD video data collected by the robot agent with a specially designed natural language map indexing in the bird's-eye view, where the indexing operates at the instance and attribute levels. In particular, when integrated with a large language model, IVLMap demonstrates the capability to (i) transform natural language into navigation targets with instance and attribute information, enabling precise localization, and (ii) accomplish zero-shot end-to-end navigation tasks based on natural language commands. Extensive navigation experiments are conducted; simulation results show that our method achieves an average improvement of 14.4% in navigation accuracy. Code and demo are released at https://ivlmap.github.io/.
DOI: 10.48550/arxiv.2403.19336
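
To make the instance-level and attribute-level indexing described in the summary concrete, the following is a minimal sketch in Python. It is a hypothetical illustration under stated assumptions, not the API of the released code at https://ivlmap.github.io/: the names `ObjectInstance`, `InstanceAwareIndex`, and `resolve` are invented for this example, and the attribute set is reduced to a single color label.

```python
# Minimal sketch (hypothetical, not from the released IVLMap code) of an
# instance- and attribute-level map index: each detected object instance is
# stored with its category, an attribute (here just a color label), and a
# bird's-eye-view centroid, so a command parsed by an LLM into
# (category, color, ordinal) can be resolved to a concrete navigation target.
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class ObjectInstance:
    """One object instance projected onto the bird's-eye-view map."""
    category: str                  # e.g. "chair"
    color: str                     # attribute label, e.g. "brown"
    centroid: Tuple[float, float]  # (x, y) in map coordinates


class InstanceAwareIndex:
    """Toy instance-level / attribute-level index over a BEV semantic map."""

    def __init__(self) -> None:
        self._instances: List[ObjectInstance] = []

    def add(self, instance: ObjectInstance) -> None:
        self._instances.append(instance)

    def resolve(
        self,
        category: str,
        color: Optional[str] = None,
        ordinal: int = 1,
    ) -> Optional[Tuple[float, float]]:
        """Return the centroid of the `ordinal`-th matching instance
        (1-based), or None if no such instance exists."""
        matches = [
            inst for inst in self._instances
            if inst.category == category
            and (color is None or inst.color == color)
        ]
        if len(matches) < ordinal:
            return None
        return matches[ordinal - 1].centroid


if __name__ == "__main__":
    index = InstanceAwareIndex()
    index.add(ObjectInstance("chair", "brown", (1.0, 2.0)))
    index.add(ObjectInstance("chair", "brown", (4.5, 0.5)))
    index.add(ObjectInstance("chair", "white", (3.0, 3.0)))

    # An LLM would parse "go to the second brown chair" into the tuple below;
    # the index then supplies the metric goal for the navigation planner.
    goal = index.resolve(category="chair", color="brown", ordinal=2)
    print(goal)  # (4.5, 0.5)
```

The point of the sketch is the lookup key: a plain object-category map can only answer "a chair", whereas keeping per-instance entries with attributes lets the same query machinery answer "the second brown chair", which is the instance-level and attribute-level distinction the abstract emphasizes.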