Maximum Entropy Heterogeneous-Agent Reinforcement Learning
Format: Article
Language: English
Online access: Order full text
Abstract: Multi-agent reinforcement learning (MARL) has been shown to be effective for cooperative games in recent years. However, existing state-of-the-art methods face challenges related to sample complexity, training instability, and the risk of converging to a suboptimal Nash equilibrium. In this paper, we propose a unified framework for learning stochastic policies to resolve these issues. We embed cooperative MARL problems into probabilistic graphical models, from which we derive the maximum entropy (MaxEnt) objective for MARL. Based on the MaxEnt framework, we propose the Heterogeneous-Agent Soft Actor-Critic (HASAC) algorithm. Theoretically, we prove the monotonic improvement and convergence to quantal response equilibrium (QRE) properties of HASAC. Furthermore, we generalize a unified template for MaxEnt algorithmic design named Maximum Entropy Heterogeneous-Agent Mirror Learning (MEHAML), which provides any induced method with the same guarantees as HASAC. We evaluate HASAC on six benchmarks: Bi-DexHands, Multi-Agent MuJoCo, StarCraft Multi-Agent Challenge, Google Research Football, Multi-Agent Particle Environment, and Light Aircraft Game. Results show that HASAC consistently outperforms strong baselines, exhibiting better sample efficiency, robustness, and sufficient exploration.
DOI: 10.48550/arxiv.2306.10715
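
The abstract states that a maximum entropy (MaxEnt) objective for MARL is derived from a probabilistic graphical model. As an illustrative sketch only, not the paper's exact formulation, such an objective could extend the single-agent MaxEnt objective with a per-agent entropy bonus; the temperature alpha, discount gamma, number of agents n, and per-agent policies pi^i below are assumed notation:

% Sketch of a MaxEnt objective for cooperative MARL, modeled on the
% single-agent soft actor-critic objective; the notation is assumed,
% not taken from the paper itself.
\[
  J(\boldsymbol{\pi}) \;=\;
  \mathbb{E}_{\tau \sim \boldsymbol{\pi}}
  \left[ \sum_{t=0}^{\infty} \gamma^{t}
    \left( r(s_t, \mathbf{a}_t)
      + \alpha \sum_{i=1}^{n} \mathcal{H}\!\left(\pi^{i}(\cdot \mid s_t)\right)
    \right)
  \right]
\]

Maximizing the per-agent entropy terms alongside the joint reward favors stochastic policies and broader exploration, which is consistent with the abstract's claims about sample efficiency and sufficient exploration.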