On-Demand Communication for Asynchronous Multi-Agent Bandits
Saved in:
Main authors: | , , , , , , |
---|---|
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Order full text |
Abstract: | This paper studies a cooperative multi-agent multi-armed stochastic bandit problem where agents operate asynchronously -- agent pull times and rates are unknown, irregular, and heterogeneous -- and face the same instance of a K-armed bandit problem. Agents can share reward information to speed up the learning process at an additional communication cost. We propose ODC, an on-demand communication protocol that tailors the communication of each pair of agents based on their empirical pull times. ODC is efficient when the pull times of agents are highly heterogeneous, and its communication complexity depends on the empirical pull times of agents. ODC is a generic protocol that can be integrated into most cooperative bandit algorithms without degrading their performance. We then incorporate ODC into natural extensions of the UCB and AAE algorithms and propose two communication-efficient cooperative algorithms. Our analysis shows that both algorithms are near-optimal in regret. |
DOI: | 10.48550/arxiv.2302.07446 |
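As an illustrative aside, the setting the abstract describes can be sketched in a toy simulation: two asynchronous UCB1 agents face the same Bernoulli bandit and occasionally exchange statistics. The doubling-based trigger below is an assumed stand-in for an on-demand rule, not the paper's actual ODC protocol, and all names here (`Agent`, `MEANS`, the schedule) are hypothetical.

```python
import math
import random

random.seed(1)

K = 3
MEANS = [0.2, 0.5, 0.9]          # Bernoulli arm means; arm 2 is best
HORIZON = 3000

class Agent:
    """UCB1 agent that merges its own samples with snapshots from a peer."""
    def __init__(self):
        self.local_n = [0] * K    # own pull counts per arm
        self.local_s = [0.0] * K  # own reward sums per arm
        self.peer_n = [0] * K     # latest snapshot received from the peer
        self.peer_s = [0.0] * K
        self.last_sent = [0] * K  # own count at the last message, per arm
        self.pulls = 0

    def _index(self, a):
        n = self.local_n[a] + self.peer_n[a]
        if n == 0:
            return float("inf")   # force one exploratory pull per arm
        mean = (self.local_s[a] + self.peer_s[a]) / n
        return mean + math.sqrt(2 * math.log(max(self.pulls, 2)) / n)

    def pull(self):
        self.pulls += 1
        a = max(range(K), key=self._index)
        r = 1.0 if random.random() < MEANS[a] else 0.0
        self.local_n[a] += 1
        self.local_s[a] += r
        # On-demand trigger (illustrative assumption): send a snapshot
        # only when the local count of this arm has doubled since the
        # last message about it, so messages grow logarithmically.
        send = self.local_n[a] >= 2 * max(self.last_sent[a], 1)
        if send:
            self.last_sent[a] = self.local_n[a]
        return a, send

    def receive(self, arm, n, s):
        # Snapshots are cumulative, so overwriting is idempotent.
        self.peer_n[arm] = n
        self.peer_s[arm] = s

agents = [Agent(), Agent()]
messages = 0
for t in range(HORIZON):
    # Heterogeneous schedule: agent 0 pulls every step, agent 1 every 5th.
    active = [0] if t % 5 else [0, 1]
    for i in active:
        arm, send = agents[i].pull()
        if send:
            messages += 1
            agents[1 - i].receive(arm, agents[i].local_n[arm],
                                  agents[i].local_s[arm])

total_pulls = sum(a.pulls for a in agents)
best_share = agents[0].local_n[2] / agents[0].pulls
print(messages, total_pulls, round(best_share, 2))
```

With the doubling trigger, each agent sends at most logarithmically many messages per arm, so the total number of messages stays far below the number of pulls, which is the qualitative behavior the abstract claims for communication-efficient protocols.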