Beyond Rewards: a Hierarchical Perspective on Offline Multiagent Behavioral Analysis
| Main authors: | , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
Abstract: Each year, expert-level performance is attained in increasingly complex multiagent domains; notable examples include Go, Poker, and StarCraft II. This rapid progression is accompanied by a commensurate need to better understand how such agents attain this performance, to enable their safe deployment, identify limitations, and reveal potential means of improving them. In this paper we take a step back from performance-focused multiagent learning and instead turn our attention towards agent behavior analysis. We introduce a model-agnostic method for discovering behavior clusters in multiagent domains, using variational inference to learn a hierarchy of behaviors at the joint and local agent levels. Our framework makes no assumptions about agents' underlying learning algorithms, does not require access to their latent states or policies, and is trained using only offline observational data. We illustrate the effectiveness of our method for coupled understanding of behaviors at the joint and local agent levels, detection of behavior changepoints throughout training, and discovery of core behavioral concepts; we further demonstrate the approach's scalability to a high-dimensional multiagent MuJoCo control domain and show that it can disentangle previously-trained policies in OpenAI's hide-and-seek domain.
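The abstract's central idea, a hierarchy of behavior latents inferred at the joint and per-agent levels from offline trajectories alone, can be sketched as a hierarchical latent-variable model. Below is a minimal, illustrative single-sample ELBO computation in plain NumPy; the linear "encoders" and "decoder", the latent dimensions, and names such as `encode_joint` and `encode_local` are assumptions made for illustration, not the paper's actual architecture or API.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kl(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over dimensions.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def sample(mu, logvar):
    # Reparameterized draw from N(mu, diag(exp(logvar))).
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

# Toy offline data: N agents, T timesteps, D-dim observations per agent.
N, T, D = 2, 50, 4
Z_JOINT, Z_LOCAL = 3, 2
trajectories = rng.normal(size=(N, T, D))

# Fixed random projections stand in for learned networks.
W_joint = rng.normal(scale=0.1, size=(N * T * D, 2 * Z_JOINT))
W_local = rng.normal(scale=0.1, size=(T * D + Z_JOINT, 2 * Z_LOCAL))
W_dec = rng.normal(scale=0.1, size=(Z_JOINT + Z_LOCAL, T * D))

def encode_joint(x):
    # q(z_joint | all agents' trajectories): one code for team-level behavior.
    params = x.reshape(-1) @ W_joint
    return params[:Z_JOINT], params[Z_JOINT:]

def encode_local(x_i, z_joint):
    # q(z_i | agent i's trajectory, z_joint): the local behavior code is
    # inferred conditioned on the joint code, which forms the hierarchy.
    params = np.concatenate([x_i.reshape(-1), z_joint]) @ W_local
    return params[:Z_LOCAL], params[Z_LOCAL:]

# Single-sample Monte Carlo ELBO over the hierarchy.
mu_j, lv_j = encode_joint(trajectories)
z_j = sample(mu_j, lv_j)
elbo = -gaussian_kl(mu_j, lv_j)
for i in range(N):
    mu_i, lv_i = encode_local(trajectories[i], z_j)
    z_i = sample(mu_i, lv_i)
    recon = np.concatenate([z_j, z_i]) @ W_dec  # decode agent i's trajectory
    # Unit-variance Gaussian log-likelihood (additive constants dropped).
    elbo += -0.5 * np.sum((trajectories[i].reshape(-1) - recon) ** 2)
    elbo += -gaussian_kl(mu_i, lv_i)
print(f"single-sample ELBO estimate: {elbo:.2f}")
```

In the paper's setting these maps would be learned rather than fixed; the sketch is meant only to show the conditional structure q(z_joint | x) and q(z_i | x_i, z_joint) that couples joint and local behavior codes from offline data.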
DOI: 10.48550/arxiv.2206.09046