Efficient Multiagent Planning via Shared Action Suggestions
Format: Article
Language: English
Abstract: Decentralized partially observable Markov decision processes with communication (Dec-POMDP-Com) provide a framework for multiagent decision making under uncertainty, but their NEXP-complete complexity renders solutions intractable in general. While sharing actions and observations can reduce the complexity to PSPACE-complete, we propose an approach that bridges POMDPs and Dec-POMDPs by communicating only suggested joint actions, eliminating the need to share observations while maintaining performance comparable to fully centralized planning and execution. Our algorithm estimates joint beliefs using shared actions to prune infeasible beliefs. Each agent maintains possible belief sets for the other agents and prunes them based on the suggested actions to form an estimated joint belief usable with any centralized policy. This approach requires solving only a POMDP for each agent, reducing computational complexity while preserving performance. We demonstrate its effectiveness on several Dec-POMDP benchmarks, showing performance comparable to centralized methods when shared actions enable effective belief pruning. This action-based communication framework offers a natural avenue for integrating human-agent cooperation, opening new directions for scalable multiagent planning under uncertainty, with applications in both autonomous systems and human-agent teams.
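
The abstract describes the belief-pruning mechanism only at a high level. Below is a minimal sketch of that idea, assuming beliefs are represented as dictionaries mapping states to probabilities and that each agent's policy maps a belief to the joint action it would suggest; the fusion rule (normalized product of the averaged surviving candidate beliefs) is an illustrative assumption, not the paper's actual algorithm.

```python
def prune(candidates, suggested, policy):
    """Discard candidate beliefs inconsistent with the suggested action."""
    kept = [b for b in candidates if policy(b) == suggested]
    return kept if kept else candidates  # keep all if pruning empties the set

def average(beliefs):
    """Uniform mixture over a set of candidate beliefs."""
    states = set().union(*beliefs)
    return {s: sum(b.get(s, 0.0) for b in beliefs) / len(beliefs) for s in states}

def estimated_joint_belief(own, candidate_sets, suggestions, policies):
    """Fuse this agent's belief with pruned estimates of the others' beliefs."""
    fused = dict(own)
    for cands, act, pol in zip(candidate_sets, suggestions, policies):
        other = average(prune(cands, act, pol))
        fused = {s: fused.get(s, 0.0) * other.get(s, 0.0)
                 for s in set(fused) | set(other)}
    total = sum(fused.values()) or 1.0
    return {s: p / total for s, p in fused.items()}

# Example: two hypothetical candidate beliefs for one teammate over
# states {s0, s1}; its policy suggests "a1" only when s1 seems likely,
# so communicating "a1" prunes the first candidate.
policy = lambda b: "a1" if b.get("s1", 0.0) > 0.5 else "a0"
candidates = [{"s0": 0.9, "s1": 0.1}, {"s0": 0.2, "s1": 0.8}]
own = {"s0": 0.5, "s1": 0.5}
print(estimated_joint_belief(own, [candidates], ["a1"], [policy]))
```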
DOI: 10.48550/arxiv.2412.11430