Scalable Multi-Agent Reinforcement Learning through Intelligent Information Aggregation
Main Authors: | , , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Abstract: | We consider the problem of multi-agent navigation and collision avoidance
when observations are limited to the local neighborhood of each agent. We
propose InforMARL, a novel architecture for multi-agent reinforcement learning
(MARL) which uses local information intelligently to compute paths for all the
agents in a decentralized manner. Specifically, InforMARL aggregates
information about the local neighborhood of agents for both the actor and the
critic using a graph neural network and can be used in conjunction with any
standard MARL algorithm. We show that (1) in training, InforMARL has better
sample efficiency and performance than baseline approaches, despite using less
information, and (2) in testing, it scales well to environments with arbitrary
numbers of agents and obstacles. We illustrate these results using four task
environments, including one with predetermined goals for each agent, and one in
which the agents collectively try to cover all goals. Code available at
https://github.com/nsidn98/InforMARL. |
---|---|
DOI: | 10.48550/arxiv.2211.02127 |
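
To make the idea in the abstract concrete, the sketch below shows one way a graph neural network can aggregate an agent's local neighborhood into an embedding consumed by both an actor and a critic head. This is an illustrative assumption-based sketch, not the authors' implementation: the module names, layer sizes, entity features, and the mean-pooling message passing are all placeholders chosen for clarity. The actual InforMARL architecture is in the linked repository.

```python
# Minimal illustrative sketch (assumptions only): aggregate an agent's local
# neighborhood with one round of message passing, then feed the resulting
# embedding to both actor and critic heads. Not the paper's exact architecture.
import torch
import torch.nn as nn


class NeighborhoodAggregator(nn.Module):
    """One round of message passing over an agent's local neighborhood graph."""

    def __init__(self, node_dim: int, hidden_dim: int):
        super().__init__()
        self.encode = nn.Linear(node_dim, hidden_dim)
        self.message = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, node_feats: torch.Tensor, adjacency: torch.Tensor) -> torch.Tensor:
        # node_feats: (num_entities, node_dim) features of nearby agents/obstacles/goals
        # adjacency:  (num_entities, num_entities) 0/1 neighborhood mask
        h = torch.relu(self.encode(node_feats))
        src = h.unsqueeze(0).expand(h.size(0), -1, -1)   # sender features per edge
        dst = h.unsqueeze(1).expand(-1, h.size(0), -1)   # receiver features per edge
        msgs = torch.relu(self.message(torch.cat([dst, src], dim=-1)))
        deg = adjacency.sum(dim=1, keepdim=True).clamp(min=1.0)
        # Mean-pool incoming messages over each node's neighbors.
        return (adjacency.unsqueeze(-1) * msgs).sum(dim=1) / deg


class ActorCritic(nn.Module):
    """Actor and critic heads that share the aggregated neighborhood embedding."""

    def __init__(self, node_dim: int, hidden_dim: int, num_actions: int):
        super().__init__()
        self.gnn = NeighborhoodAggregator(node_dim, hidden_dim)
        self.actor = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                                   nn.Linear(hidden_dim, num_actions))
        self.critic = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                                    nn.Linear(hidden_dim, 1))

    def forward(self, node_feats, adjacency, ego_index: int):
        agg = self.gnn(node_feats, adjacency)[ego_index]  # embedding for this agent
        return self.actor(agg), self.critic(agg)


# Usage example: one agent observing 5 nearby entities with 6-dim features each.
model = ActorCritic(node_dim=6, hidden_dim=64, num_actions=5)
feats = torch.randn(5, 6)
adj = (torch.rand(5, 5) > 0.5).float()
logits, value = model(feats, adj, ego_index=0)
```

Because the aggregation depends only on whatever entities fall inside the local neighborhood, the same network can be applied unchanged as the number of agents and obstacles grows, which is the property the abstract highlights for test-time scalability.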