ConcaveQ: Non-Monotonic Value Function Factorization via Concave Representations in Deep Multi-Agent Reinforcement Learning
Main authors: | , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | AAAI 2024. Value function factorization has achieved great success in
multi-agent reinforcement learning by optimizing joint action-value functions
through the maximization of factorized per-agent utilities. To ensure the
Individual-Global-Maximum (IGM) property, existing works often restrict value
factorization to monotonic mixing functions, which are known to limit
representational expressiveness. In this paper, we analyze the limitations of
monotonic factorization and present ConcaveQ, a novel non-monotonic value
function factorization approach that goes beyond monotonic mixing functions and
employs neural network representations of concave mixing functions. Leveraging
the concavity of the factorization, an iterative action selection scheme is
developed to obtain optimal joint actions during training. These joint actions
are used to update agents' local policy networks, enabling fully decentralized
execution. The effectiveness of the proposed ConcaveQ is validated on a
multi-agent predator-prey environment and StarCraft II micromanagement tasks.
Empirical results show significant improvements of ConcaveQ over
state-of-the-art multi-agent reinforcement learning approaches. |
DOI: | 10.48550/arxiv.2312.15555 |
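The two ingredients named in the summary, a concave mixing network over per-agent utilities and an iterative action selection scheme, can be illustrated with a short sketch. The PyTorch code below is not from the paper: the layer sizes, the choice of `-softplus(-x)` as a concave non-decreasing activation, and the coordinate-ascent reading of "iterative action selection" are assumptions made here purely for illustration.

```python
# Illustrative sketch only; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConcaveMixer(nn.Module):
    """Maps per-agent utilities q of shape (batch, n_agents) to a joint value
    Q_tot that is concave (but not necessarily monotonic) in q. Concavity
    follows from standard composition rules:
      * the first layer is affine in q,
      * -softplus(-x) is a concave, non-decreasing activation,
      * the output layer uses non-negative weights, and a non-negative
        combination of concave functions is concave."""

    def __init__(self, n_agents: int, hidden_dim: int = 32):
        super().__init__()
        self.fc1 = nn.Linear(n_agents, hidden_dim)         # affine in q, free sign
        self.w2 = nn.Parameter(torch.rand(hidden_dim, 1))  # made non-negative below
        self.b2 = nn.Parameter(torch.zeros(1))

    def forward(self, q: torch.Tensor) -> torch.Tensor:
        h = -F.softplus(-self.fc1(q))     # concave, non-decreasing activation
        w2 = F.softplus(self.w2)          # enforce non-negative output weights
        return h @ w2 + self.b2           # concave in q, shape (batch, 1)


def iterative_action_selection(agent_qs, mixer, n_iters: int = 3):
    """Hypothetical coordinate-ascent scheme standing in for the paper's
    iterative action selection: cycle through agents and, for each agent,
    pick the action that maximizes the mixed joint value with the other
    agents' actions held fixed. agent_qs: (batch, n_agents, n_actions)."""
    batch, n_agents, n_actions = agent_qs.shape
    actions = agent_qs.argmax(dim=-1)                        # greedy initialization
    for _ in range(n_iters):
        for i in range(n_agents):
            # Enumerate every candidate action of agent i, others held fixed.
            candidates = actions.unsqueeze(1).repeat(1, n_actions, 1)
            candidates[:, :, i] = torch.arange(n_actions).unsqueeze(0)
            # Gather the per-agent utilities of each candidate joint action.
            chosen_q = torch.gather(
                agent_qs.unsqueeze(1).expand(-1, n_actions, -1, -1),
                dim=-1,
                index=candidates.unsqueeze(-1),
            ).squeeze(-1)                                    # (batch, n_actions, n_agents)
            q_tot = mixer(chosen_q.reshape(-1, n_agents)).reshape(batch, n_actions)
            actions[:, i] = q_tot.argmax(dim=-1)
    return actions


# Example usage with random utilities: 4 samples, 3 agents, 5 actions each.
mixer = ConcaveMixer(n_agents=3)
qs = torch.randn(4, 3, 5)
joint_actions = iterative_action_selection(qs, mixer)
```

The sketch only shows how concavity can be enforced architecturally and how a per-agent sweep can exploit the mixer when searching for joint actions; the paper's actual network (e.g., any state-conditioned hypernetworks) and its convergence argument for the selection scheme are not reproduced here.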