Active Visual Localization for Multi-Agent Collaboration: A Data-Driven Approach
Format: Article
Language: English
Abstract: Rather than having each newly deployed robot create its own map of its surroundings, the growing availability of SLAM-enabled devices provides the option of simply localizing in a map of another robot or device. In cases such as multi-robot or human-robot collaboration, localizing all agents in the same map is even necessary. However, localizing, e.g., a ground robot in the map of a drone or a head-mounted MR headset presents unique challenges due to viewpoint changes. This work investigates how active visual localization can be used to overcome such viewpoint changes. Specifically, we focus on the problem of selecting the optimal viewpoint at a given location. We compare existing approaches from the literature with additional proposed baselines and propose a novel data-driven approach. The results demonstrate the superior performance of the data-driven approach compared to existing methods, both in controlled simulation experiments and in real-world deployment.
DOI: 10.48550/arxiv.2310.02650