MESA: Cooperative Meta-Exploration in Multi-Agent Learning through Exploiting State-Action Space Structure
Main authors:
Format: Article
Language: English
Subjects:
Abstract: Multi-agent reinforcement learning (MARL) algorithms often struggle to find strategies close to a Pareto-optimal Nash equilibrium, owing largely to a lack of efficient exploration. The problem is exacerbated in sparse-reward settings because of the larger variance exhibited in policy learning. This paper introduces MESA, a novel meta-exploration method for cooperative multi-agent learning. It learns to explore by first identifying the agents' high-rewarding joint state-action subspace from training tasks and then learning a set of diverse exploration policies to "cover" that subspace. These trained exploration policies can be integrated with any off-policy MARL algorithm for test-time tasks. We first showcase MESA's advantage in a multi-step matrix game. Experiments further show that, with the learned exploration policies, MESA achieves significantly better performance on sparse-reward tasks in several multi-agent particle and multi-agent MuJoCo environments, and generalizes to more challenging tasks at test time.
DOI: 10.48550/arxiv.2405.00902
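
The abstract describes pretrained exploration policies being plugged into an off-policy MARL learner at test time. Below is a minimal, hypothetical Python sketch of one way such an integration could look: the behavior policy occasionally samples actions from one of the diverse exploration policies while transitions are stored for off-policy training. All names (`explore_policies`, `task_policy`, `replay_buffer`, `epsilon`, the environment API) are illustrative assumptions, not taken from the paper or its code.

```python
import random

def collect_episode(env, task_policy, explore_policies, replay_buffer, epsilon=0.3):
    """Roll out one episode, occasionally acting with a pretrained
    exploration policy instead of the task policy being learned.

    Assumed interfaces (hypothetical, not from the paper):
      - env.reset() -> list of per-agent observations
      - env.step(actions) -> (next_obs, rewards, done, info)
      - policy.act(agent_obs) -> action for one agent
      - replay_buffer.add(...) stores a joint transition
    """
    obs = env.reset()
    done = False
    while not done:
        if random.random() < epsilon:
            # With probability epsilon, use one of the diverse
            # exploration policies as the behavior policy this step.
            behavior = random.choice(explore_policies)
        else:
            behavior = task_policy
        # Joint action: one action per agent from the chosen behavior policy.
        actions = [behavior.act(agent_obs) for agent_obs in obs]
        next_obs, rewards, done, _ = env.step(actions)
        # Off-policy learning: every transition goes into the replay buffer,
        # regardless of which policy generated it.
        replay_buffer.add(obs, actions, rewards, next_obs, done)
        obs = next_obs
```

In this sketch, annealing `epsilon` over test-time training would shift data collection from the pretrained exploration policies toward the task policy; this schedule is an assumption made for illustration.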