Linear Convergence of Independent Natural Policy Gradient in Games with Entropy Regularization
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: This work focuses on the entropy-regularized independent natural policy gradient (NPG) algorithm in multi-agent reinforcement learning. Agents are assumed to have access to an oracle with exact policy evaluation and seek to maximize their respective independent rewards. Each individual's reward is assumed to depend on the actions of all the agents in the multi-agent system, leading to a game between agents. We assume all agents make decisions under a policy with bounded rationality, which is enforced by the introduction of entropy regularization. In practice, smaller regularization implies that the agents are more rational and behave closer to Nash policies; agents with larger regularization act more randomly, which ensures more exploration. We show that, under sufficient entropy regularization, the dynamics of this system converge at a linear rate to the quantal response equilibrium (QRE). Although regularization assumptions prevent the QRE from approximating a Nash equilibrium, our findings apply to a wide range of games, including cooperative, potential, and two-player matrix games. We also provide extensive empirical results on multiple games (including Markov games) as verification of our theoretical analysis.
DOI: 10.48550/arxiv.2405.02769
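As a companion to the abstract, the sketch below illustrates the kind of update the paper studies: entropy-regularized independent natural policy gradient for softmax policies in a two-player matrix game, which in this parameterization takes the multiplicative form pi_{t+1}(a) ∝ pi_t(a)^(1 - eta*tau) * exp(eta * q_t(a)). This is a minimal illustration, not the authors' implementation; the payoff matrix, step size eta, and regularization strength tau are illustrative assumptions rather than values from the paper. At a quantal response equilibrium each player's policy satisfies pi(a) ∝ exp(q(a) / tau).

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical 2x2 identical-interest (cooperative) matrix game; payoffs,
# step size, and regularization strength are illustrative choices only.
A = np.array([[3.0, 0.0],
              [0.0, 1.0]])   # payoff to player 1 for each (row, column) action pair
B = A                        # identical interest: player 2 receives the same payoff

tau = 1.0   # entropy-regularization strength (larger tau = more random play)
eta = 0.5   # step size; the multiplicative update below assumes eta * tau <= 1

x = softmax(np.zeros(2))    # player 1 policy, initialized uniform
y = softmax(np.zeros(2))    # player 2 policy, initialized uniform

for _ in range(200):
    # Exact ("oracle") action values against the opponent's current policy.
    qx = A @ y
    qy = B.T @ x
    # Entropy-regularized NPG step for softmax policies, applied independently
    # by each player: pi_{t+1}(a) ∝ pi_t(a)^(1 - eta*tau) * exp(eta * q_t(a)).
    x = softmax((1.0 - eta * tau) * np.log(x) + eta * qx)
    y = softmax((1.0 - eta * tau) * np.log(y) + eta * qy)

# At a quantal response equilibrium, each policy is a softmax (smoothed best)
# response to the other: pi(a) ∝ exp(q(a) / tau).
print("player 1 policy:", x, "smoothed best response:", softmax(A @ y / tau))
print("player 2 policy:", y, "smoothed best response:", softmax(B.T @ x / tau))
```

In this cooperative example the two printed pairs nearly coincide after a few hundred iterations, consistent with convergence to a QRE. The regime the paper analyzes is the one where tau is sufficiently large relative to the payoffs, in which case the iterates contract toward the QRE at a linear rate; for very small tau the same update need not converge in general games.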