MUI-TARE: Cooperative Multi-Agent Exploration With Unknown Initial Position

Bibliographic Details
Published in: IEEE Robotics and Automation Letters, 2023-07, Vol. 8 (7), pp. 4299-4306
Authors: Yan, Jingtian; Lin, Xingqiao; Ren, Zhongqiang; Zhao, Shiqi; Yu, Jieqiong; Cao, Chao; Yin, Peng; Zhang, Ji; Scherer, Sebastian
Format: Article
Language: English
Description
Abstract: Multi-agent exploration of a bounded 3D environment with unknown initial poses of the agents is a challenging problem. It requires both quickly exploring the environment and robustly merging the sub-maps built by the agents. Most existing exploration strategies directly merge two sub-maps built by different agents as soon as a single frame of observation is matched, which can lead to incorrect merging due to false-positive overlap detections and is thus not robust. Meanwhile, some recent place recognition methods use sequence matching for robust data association. However, naively applying these sequence matching methods to multi-agent exploration may require one agent to retrace a large portion of another agent's historical trajectory before a sequence of matched observations can be established, which reduces overall exploration efficiency. To intelligently balance the robustness of sub-map merging against exploration efficiency, we develop a new approach for lidar-based multi-agent exploration that directs one agent to repeat another agent's trajectory adaptively, based on a quality indicator of the sub-map merging process. Additionally, our approach extends a recent single-agent hierarchical exploration strategy to multiple agents whose sub-maps have been merged, so that they cooperate to improve exploration efficiency. Our experiments show that our approach is up to 50% more efficient than the baselines while merging sub-maps robustly.
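
The adaptive merging behaviour described in the abstract can be illustrated with a short sketch. The Python snippet below is not the authors' implementation; all class names, thresholds, and the scoring heuristic are hypothetical. It only shows the idea of accumulating sequence-match evidence before deciding whether to fuse two sub-maps, keep retracing the other agent's trajectory, or abandon the candidate overlap and resume independent exploration.

# Hypothetical sketch of the adaptive sub-map merging decision: an agent keeps
# retracing part of another agent's trajectory only while the accumulated
# sequence-matching quality remains inconclusive, rather than merging on a
# single matched frame or repeating a fixed-length sequence. All names and
# threshold values here are illustrative assumptions, not the paper's method.

from dataclasses import dataclass, field
from typing import List

@dataclass
class MergeQualityTracker:
    accept_threshold: float = 0.9   # confidence needed to accept the merge
    reject_threshold: float = 0.2   # below this, abandon the candidate overlap
    min_sequence: int = 5           # minimum matched frames before rejecting
    scores: List[float] = field(default_factory=list)

    def update(self, frame_match_score: float) -> str:
        """Fold in the match score of the latest scan pair and decide what to do.

        Returns one of:
          "merge"  - accumulated evidence is strong; fuse the two sub-maps
          "repeat" - keep retracing the other agent's trajectory for more evidence
          "abort"  - evidence is weak; resume independent exploration
        """
        self.scores.append(frame_match_score)
        n = len(self.scores)
        # Simple quality indicator: mean score over the matched sequence,
        # discounted while the sequence is still short.
        quality = (sum(self.scores) / n) * min(1.0, n / self.min_sequence)

        if quality >= self.accept_threshold:
            return "merge"
        if n >= self.min_sequence and quality < self.reject_threshold:
            return "abort"
        return "repeat"


if __name__ == "__main__":
    tracker = MergeQualityTracker()
    for score in [0.7, 0.85, 0.9, 0.95, 0.97]:
        print(f"score={score:.2f} -> {tracker.update(score)}")

The point of accumulating an indicator like this is that a single well-matched frame never triggers a merge on its own, while a consistently strong sequence does so quickly, so the retraced portion of the other agent's trajectory stays only as long as the evidence requires.
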
ISSN: 2377-3766
DOI: 10.1109/LRA.2023.3281262