Stateful Active Facilitator: Coordination and Environmental Heterogeneity in Cooperative Multi-Agent Reinforcement Learning
Saved in:
Main Authors: | , , , , , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Summary: | In cooperative multi-agent reinforcement learning, a team of agents works
together to achieve a common goal. Different environments or tasks may require
varying degrees of coordination among agents in order to achieve the goal in an
optimal way. The nature of coordination will depend on the properties of the
environment -- its spatial layout, distribution of obstacles, dynamics, etc. We
refer to this variation of properties within an environment as heterogeneity.
Existing literature has not sufficiently addressed the fact that different
environments may have different levels of heterogeneity. We formalize the
notions of coordination level and heterogeneity level of an environment and
present HECOGrid, a suite of multi-agent RL environments that facilitates
empirical evaluation of different MARL approaches across different levels of
coordination and environmental heterogeneity by providing a quantitative
control over coordination and heterogeneity levels of the environment. Further,
we propose a Centralized Training Decentralized Execution learning approach
called Stateful Active Facilitator (SAF) that enables agents to work
efficiently in high-coordination and high-heterogeneity environments through a
differentiable and shared knowledge source used during training and dynamic
selection from a shared pool of policies. We evaluate SAF and compare its
performance against baselines IPPO and MAPPO on HECOGrid. Our results show that
SAF consistently outperforms the baselines across different tasks and different
heterogeneity and coordination levels. We release the code for HECOGrid as well
as all our experiments. |
---|---|
DOI: | 10.48550/arxiv.2210.03022 |
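
As a rough illustration of the mechanism described in the abstract (dynamic selection from a shared pool of policies), the following minimal PyTorch sketch shows one way such a pool could be realized. This is not the authors' implementation of SAF; the class name PolicyPool, the soft selector, and all dimensions are illustrative assumptions.

```python
# Hypothetical sketch, not the paper's code: one way to implement "dynamic
# selection from a shared pool of policies" as mentioned in the abstract.
# PolicyPool, num_policies, and all dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class PolicyPool(nn.Module):
    """A shared pool of K small policy networks; agents pick from it per step."""
    def __init__(self, obs_dim, act_dim, num_policies=4, hidden=64):
        super().__init__()
        self.policies = nn.ModuleList([
            nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh(),
                          nn.Linear(hidden, act_dim))
            for _ in range(num_policies)
        ])
        # The selector scores each policy given the agent's own observation.
        self.selector = nn.Linear(obs_dim, num_policies)

    def forward(self, obs):
        # Soft selection keeps the choice differentiable during training;
        # at execution time an agent could instead commit to the argmax policy.
        weights = torch.softmax(self.selector(obs), dim=-1)           # (B, K)
        logits = torch.stack([p(obs) for p in self.policies], dim=1)  # (B, K, A)
        return (weights.unsqueeze(-1) * logits).sum(dim=1)            # (B, A)

# Usage: every agent shares the same pool but conditions on its own observation.
pool = PolicyPool(obs_dim=16, act_dim=5)
obs = torch.randn(3, 16)       # observations for 3 agents
action_logits = pool(obs)      # per-agent action logits, shape (3, 5)
```

Under a Centralized Training Decentralized Execution scheme like the one named in the abstract, such a pool (and any shared knowledge source) would be trained jointly across agents, while each agent needs only its local observation to act.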