Optimal Cooperative Multiplayer Learning Bandits with Noisy Rewards and No Communication
Saved in:
Main Authors: | , |
---|---|
Format: | Article |
Language: | English |
Abstract: | We consider a cooperative multiplayer bandit learning problem where the
players are only allowed to agree on a strategy beforehand, but cannot
communicate during the learning process. In this problem, each player
simultaneously selects an action. Based on the actions selected by all players,
the team of players receives a reward. The actions of all the players are
commonly observed. However, each player receives a noisy version of the reward
which cannot be shared with other players. Since players receive potentially
different rewards, there is an asymmetry in the information used to select
their actions. In this paper, we provide an algorithm based on upper and lower
confidence bounds that the players can use to select their optimal actions
despite the asymmetry in the reward information. We show that this algorithm
can achieve logarithmic $O(\frac{\log T}{\Delta_{\bm{a}}})$ (gap-dependent)
regret as well as $O(\sqrt{T\log T})$ (gap-independent) regret. This is
asymptotically optimal in $T$. We also show that it empirically outperforms
the current state-of-the-art algorithm for this environment. |
---|---|
DOI: | 10.48550/arxiv.2311.06210 |
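
For intuition, below is a minimal single-agent sketch of the confidence-bound indexing the abstract refers to. This is not the paper's multiplayer algorithm: the paper additionally uses lower confidence bounds to keep players with differing noisy reward observations coordinated without communication. The callback `noisy_reward`, the exploration constant `c`, and all other names here are illustrative assumptions.

```python
import math
import random

def ucb1(num_actions, horizon, noisy_reward, c=2.0):
    """Standard UCB1 index over a set of (joint) actions.

    A minimal sketch of the confidence-bound idea only; the paper's
    multiplayer coordination via upper *and* lower confidence bounds
    is not reproduced here. `noisy_reward(a)` is a hypothetical
    callback returning one noisy sample of the team reward for
    joint action index `a`.
    """
    counts = [0] * num_actions   # times each action has been played
    means = [0.0] * num_actions  # running mean of observed rewards
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= num_actions:
            a = t - 1            # initialization: play each action once
        else:
            # pick the action with the largest upper confidence bound
            a = max(range(num_actions),
                    key=lambda i: means[i] + math.sqrt(c * math.log(t) / counts[i]))
        r = noisy_reward(a)
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]  # incremental mean update
        total += r
    return total

# Example: 3 joint actions with Gaussian reward noise (illustrative only).
mus = [0.2, 0.5, 0.8]
random.seed(0)
print(ucb1(3, 10_000, lambda a: random.gauss(mus[a], 1.0)))
```

In the paper's setting, each player would run such an index on its own noisy reward samples; because the samples differ across players, naive per-player indices can desynchronize the team's joint action, which is precisely the asymmetry the paper's upper/lower confidence-bound construction is designed to handle.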