Diffusion-Reinforcement Learning Hierarchical Motion Planning in Adversarial Multi-agent Games
Main author(s):
Format: Article
Language: English
Online access: Order full text
Abstract: Reinforcement Learning (RL)-based motion planning has recently shown the
potential to outperform traditional approaches from autonomous navigation to
robot manipulation. In this work, we focus on a motion planning task for an
evasive target in a partially observable multi-agent adversarial
pursuit-evasion game (PEG). These pursuit-evasion problems are relevant to
various applications, such as search and rescue operations and surveillance
robots, where robots must effectively plan their actions to gather intelligence
or accomplish mission tasks while avoiding detection or capture themselves. We
propose a hierarchical architecture that integrates a high-level diffusion
model to plan global paths responsive to environment data, while a low-level RL
algorithm reasons about evasive versus global path-following behavior. Our
approach outperforms baselines by 51.2% by leveraging the diffusion model to
guide the RL algorithm toward more efficient exploration, and it improves
explainability and predictability.
DOI: 10.48550/arxiv.2403.10794
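
For orientation, the sketch below illustrates the two-level structure the abstract describes: a high-level planner that generates a global path (standing in for the trained diffusion model) and a low-level policy that switches between path-following and evasive behavior (standing in for the trained RL agent). All class names, parameters, and the threshold-based evasion rule are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of the hierarchical planner described in the abstract.
# Names and interfaces are hypothetical; a trained diffusion model and a
# trained RL policy would replace the stand-ins below.

class DiffusionGlobalPlanner:
    """High-level planner: denoises a noisy waypoint sequence toward the goal.

    Stands in for a learned diffusion model that generates a global path
    conditioned on environment data (here, just start and goal positions).
    """

    def __init__(self, num_waypoints=16, denoise_steps=32):
        self.num_waypoints = num_waypoints
        self.denoise_steps = denoise_steps

    def plan(self, start, goal, rng):
        # Start from noise around the straight-line path...
        base = np.linspace(start, goal, self.num_waypoints)
        path = base + rng.normal(scale=1.0, size=base.shape)
        # ...and iteratively pull it toward the base path, mimicking the
        # reverse diffusion process of a trained model.
        for t in range(self.denoise_steps):
            alpha = (t + 1) / self.denoise_steps
            path = (1 - alpha) * path + alpha * base
        return path


class LowLevelPolicy:
    """Low-level controller: trades off path following against evasion.

    A trained RL policy would map observations to this decision; here a
    simple distance threshold stands in for the learned behavior.
    """

    def __init__(self, evade_radius=2.0, step_size=0.5):
        self.evade_radius = evade_radius
        self.step_size = step_size

    def act(self, position, waypoint, pursuers):
        dists = [np.linalg.norm(position - p) for p in pursuers]
        if dists and min(dists) < self.evade_radius:
            # Evasive behavior: move away from the nearest pursuer.
            nearest = pursuers[int(np.argmin(dists))]
            direction = position - nearest
        else:
            # Path-following behavior: head toward the current waypoint.
            direction = waypoint - position
        norm = np.linalg.norm(direction)
        return position if norm == 0 else position + self.step_size * direction / norm


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    planner = DiffusionGlobalPlanner()
    policy = LowLevelPolicy()
    path = planner.plan(np.array([0.0, 0.0]), np.array([10.0, 10.0]), rng)
    pos, pursuers = np.array([0.0, 0.0]), [np.array([5.0, 4.0])]
    for waypoint in path:
        pos = policy.act(pos, waypoint, pursuers)
    print("final evader position:", pos)
```

In this toy version the high-level path guides where the evader goes overall, while the low-level rule decides at each step whether to follow that path or break away from a nearby pursuer, mirroring the division of labor between the diffusion planner and the RL policy described above.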