Multi-Agent Common Knowledge Reinforcement Learning
Saved in:
Main authors: , , , , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Cooperative multi-agent reinforcement learning often requires
decentralised policies, which severely limit the agents' ability to coordinate
their behaviour. In this paper, we show that common knowledge between agents
allows for complex decentralised coordination. Common knowledge arises
naturally in a large number of decentralised cooperative multi-agent tasks, for
example, when agents can reconstruct parts of each others' observations. Since
agents can independently agree on their common knowledge, they can execute
complex coordinated policies that condition on this knowledge in a fully
decentralised fashion. We propose multi-agent common knowledge reinforcement
learning (MACKRL), a novel stochastic actor-critic algorithm that learns a
hierarchical policy tree. Higher levels in the hierarchy coordinate groups of
agents by conditioning on their common knowledge, or delegate to lower levels
with smaller subgroups but potentially richer common knowledge. The entire
policy tree can be executed in a fully decentralised fashion. As the lowest
policy tree level consists of independent policies for each agent, MACKRL
reduces to independently learnt decentralised policies as a special case. We
demonstrate that our method can exploit common knowledge for superior
performance on complex decentralised coordination tasks, including a stochastic
matrix game and challenging problems in StarCraft II unit micromanagement.
DOI: 10.48550/arxiv.1810.11702
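The abstract's key mechanism, a hierarchical policy tree whose higher levels act on common knowledge or delegate to per-agent policies, can be illustrated with a minimal sketch. This is an assumption-laden toy for two agents, not the authors' implementation: the function names (`pair_controller`, `independent_policy`) and the string-valued common knowledge are hypothetical.

```python
import random

def independent_policy(agent_id):
    """Lowest tree level: each agent acts on its own observation only
    (here, a random choice stands in for a learnt decentralised policy)."""
    return random.choice(["left", "right"])

def pair_controller(common_knowledge):
    """Top level of a toy two-agent policy tree.

    If common knowledge exists, both agents can derive the SAME joint
    action from it independently, so coordination needs no communication.
    Otherwise the controller delegates to independent per-agent policies.
    """
    if common_knowledge is not None:
        # Coordinated branch: a deterministic function of the shared
        # knowledge, so both agents compute an identical joint action.
        if common_knowledge == "target_left":
            return ("left", "left")
        return ("right", "right")
    # Delegation branch: fall back to the lowest tree level.
    return (independent_policy(0), independent_policy(1))

# Both agents evaluate the same controller on the same common knowledge,
# so decentralised execution still yields a coordinated joint action.
print(pair_controller("target_left"))  # ('left', 'left')
```

Because every agent runs the identical controller on knowledge it can reconstruct itself, the whole tree executes in a fully decentralised fashion; with no common knowledge, the sketch reduces to independent policies, mirroring the special case noted in the abstract.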