Offline Multi-Agent Reinforcement Learning via In-Sample Sequential Policy Optimization
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Offline Multi-Agent Reinforcement Learning (MARL) is an emerging field that
aims to learn optimal multi-agent policies from pre-collected datasets.
Compared to the single-agent case, the multi-agent setting involves a large joint
state-action space and coupled behaviors of multiple agents, which bring extra
complexity to offline policy optimization. In this work, we revisit the
existing offline MARL methods and show that in certain scenarios they can be
problematic, leading to uncoordinated behaviors and out-of-distribution (OOD)
joint actions. To address these issues, we propose a new offline MARL
algorithm, named In-Sample Sequential Policy Optimization (InSPO). InSPO
sequentially updates each agent's policy in an in-sample manner, which not only
avoids selecting OOD joint actions but also carefully considers teammates'
updated policies to enhance coordination. Additionally, by thoroughly exploring
low-probability actions in the behavior policy, InSPO can well address the
issue of premature convergence to sub-optimal solutions. Theoretically, we
prove InSPO guarantees monotonic policy improvement and converges to quantal
response equilibrium (QRE). Experimental results demonstrate the effectiveness
of our method compared to current state-of-the-art offline MARL methods.
DOI: 10.48550/arxiv.2412.07639
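The abstract's core idea, updating agents one at a time using only in-sample actions while conditioning on teammates' already-updated policies, can be illustrated with a small tabular sketch. This is not the paper's algorithm: the function name `sequential_in_sample_update`, the greedy teammate choice, the softmax reweighting, and the `beta` temperature are illustrative assumptions standing in for InSPO's actual regularized objective.

```python
import numpy as np

def sequential_in_sample_update(policies, dataset, q_joint, beta=1.0):
    """One round of agent-by-agent policy reweighting (illustrative sketch).

    policies : list of dicts, one per agent, mapping state -> {action: prob}.
    dataset  : list of (state, joint_action) pairs from the behavior policy.
    q_joint  : callable (state, joint_action_tuple) -> scalar value estimate.
    beta     : hypothetical inverse temperature; smaller values keep
               low-probability behavior actions alive longer.
    """
    n_agents = len(policies)
    for i in range(n_agents):  # sequential update: agent i sees teammates' new policies
        for state, _ in dataset:
            # In-sample constraint: agent i only reweights actions that
            # actually appear for it in the dataset at this state.
            in_sample_actions = {a[i] for s, a in dataset if s == state}
            # Teammates' actions come from their current (already updated)
            # policies; a greedy choice is used purely to keep the sketch short.
            teammates = [max(policies[j][state], key=policies[j][state].get)
                         for j in range(n_agents)]
            scores = {}
            for a_i in in_sample_actions:
                joint = list(teammates)
                joint[i] = a_i
                scores[a_i] = q_joint(state, tuple(joint))
            # Soft (softmax) reweighting over in-sample actions, standing in
            # for the regularized in-sample objective the abstract alludes to.
            m = max(scores.values())  # subtract max for numerical stability
            weights = {a: np.exp(beta * (v - m)) for a, v in scores.items()}
            z = sum(weights.values())
            policies[i][state] = {a: w / z for a, w in weights.items()}
    return policies
```

A fuller implementation would presumably replace the greedy teammate actions with expectations under the teammates' stochastic policies and supply `q_joint` from a learned critic trained on the offline data.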