Multiagent, Multitarget Path Planning in Markov Decision Processes
Saved in:
Published in: IEEE Transactions on Automatic Control, 2023-12, Vol. 68 (12), pp. 7560-7574
Main authors:
Format: Article
Language: English
Keywords:
Online access: Full text
Abstract: Missions for autonomous systems often require agents to visit multiple targets under complex operating conditions. This work considers the problem of visiting a set of targets in minimum time by a team of noncommunicating agents in a Markov decision process (MDP). The single-agent problem is at least as hard as the Hamiltonian path problem and is therefore NP-hard. We first discuss an optimal algorithm based on Bellman's optimality equation that is exponential in the number of target states. We then trade off optimality for time complexity by presenting a suboptimal algorithm that is polynomial at each time step. We prove that the proposed algorithm generates optimal policies for certain classes of MDPs. Extending our procedure to the multiagent case, we propose a target partitioning algorithm that approximately minimizes the expected time to visit the targets. We prove that our algorithm generates optimal partitions for clustered target scenarios. We evaluate our algorithms on random MDPs and on gridworld environments inspired by ocean dynamics, showing that they run much faster than the optimal procedure and come closer to optimal than the currently available heuristic.
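To make the exact procedure concrete, the following is a minimal sketch (not the authors' implementation) of a Bellman-style dynamic program for the single-agent problem: value iteration over the augmented state (MDP state, set of still-unvisited targets). The value table has n_states x 2^|targets| entries, which is the exponential dependence on the number of target states noted in the abstract. Function and variable names, and the cost convention of one time unit per transition, are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's code) of exact value iteration for the
# minimum expected time to visit a set of target states in an MDP. The augmented state
# is (current state, frozenset of unvisited targets), so the value table has
# n_states * 2^|targets| entries -- the exponential cost the abstract refers to.
from itertools import chain, combinations

import numpy as np


def expected_visit_times(P, targets, tol=1e-8, max_iter=100_000):
    """P: transition tensor of shape (n_actions, n_states, n_states).
    targets: iterable of target state indices.
    Returns a dict V with V[(s, R)] = min expected time to visit all targets in R,
    starting from state s (s is assumed to have been removed from R already)."""
    n_actions, n_states, _ = P.shape
    targets = list(targets)

    # All subsets of the target set: the source of the exponential blow-up.
    subsets = [frozenset(c)
               for c in chain.from_iterable(combinations(targets, r)
                                            for r in range(len(targets) + 1))]

    V = {(s, R): 0.0 for s in range(n_states) for R in subsets}

    for _ in range(max_iter):
        delta = 0.0
        for R in subsets:
            if not R:
                continue  # all targets visited: zero remaining time
            for s in range(n_states):
                if s in R:
                    continue  # current state is already removed from R by convention
                # Bellman backup: one time unit per step; a successor s2 removes
                # itself from the unvisited set if it happens to be a target.
                best = min(
                    1.0 + sum(P[a, s, s2] * V[(s2, R - {s2})]
                              for s2 in range(n_states))
                    for a in range(n_actions)
                )
                delta = max(delta, abs(best - V[(s, R)]))
                V[(s, R)] = best
        if delta < tol:
            break
    return V
```

A hypothetical usage on a tiny random MDP (sizes and seed arbitrary) might look like:

```python
rng = np.random.default_rng(0)
n_actions, n_states = 2, 6
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # rows sum to 1
targets = [2, 5]
V = expected_visit_times(P, targets)
start = 0
print(V[(start, frozenset(targets) - {start})])  # expected time to visit both targets
```

Each sweep costs O(2^|targets| * n_states^2 * n_actions), which is why the suboptimal algorithm mentioned in the abstract, polynomial at each time step, matters for larger target sets.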
ISSN: 0018-9286, 1558-2523
DOI: 10.1109/TAC.2023.3286807