Benchmarking Robustness and Generalization in Multi-Agent Systems: A Case Study on Neural MMO
Main authors: | , , , , , , , , , , , , , , , , , , , , |
Format: | Article |
Language: | eng |
Online access: | Order full text |
Abstract: | We present the results of the second Neural MMO challenge, hosted at IJCAI 2022, which received 1600+ submissions. This competition targets robustness and generalization in multi-agent systems: participants train teams of agents to complete a multi-task objective against opponents not seen during training. The competition combines relatively complex environment design with large numbers of agents in the environment. The top submissions demonstrate strong success on this task using mostly standard reinforcement learning (RL) methods combined with domain-specific engineering. We summarize the competition design and results, and we suggest that competitions may be a powerful way for the academic community to solve hard problems and establish solid benchmarks for algorithms. We will open-source our benchmark, including the environment wrapper, baselines, a visualization tool, and selected policies, for further research. |
DOI: | 10.48550/arxiv.2308.15802 |