Multi-Agent Stochastic Bandits Robust to Adversarial Corruptions
Format: Article
Language: English
Online access: Order full text
Abstract: We study the problem of multi-agent multi-armed bandits with adversarial corruption in a heterogeneous setting, where each agent accesses only a subset of the arms. The adversary can corrupt the reward observations of all agents. Agents share these corrupted rewards with one another, and the objective is to maximize the cumulative total reward of all agents without being misled by the adversary. We propose a multi-agent cooperative learning algorithm that is robust to adversarial corruptions. For this newly devised algorithm, we demonstrate that an adversary with an unknown corruption budget $C$ incurs only an additive $O((L / L_{\min}) C)$ term on top of the standard regret of the model in the non-corruption setting, where $L$ is the total number of agents and $L_{\min}$ is the minimum number of agents with mutual access to an arm. As a by-product, our algorithm also improves the state-of-the-art regret bounds when reduced to both the single-agent and homogeneous multi-agent scenarios, tightening multiplicative factors of $K$ (the number of arms) and $L$ (the number of agents), respectively.
DOI: 10.48550/arxiv.2411.08167
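To make the setting concrete, below is a minimal simulation sketch of the environment the abstract describes; it is not the paper's algorithm. The arm count, access sets, Bernoulli rewards, the 10% flip rule, and the uniform exploration policy are all illustrative assumptions. It shows how $L_{\min}$ arises from the access structure and how a bounded budget $C$ limits the total corruption the adversary can inject into the shared observations.

```python
import random

# Illustrative sketch of the heterogeneous corrupted multi-agent bandit
# setting from the abstract. All concrete values and the corruption rule
# are assumptions for demonstration, not the paper's construction.

random.seed(0)

K = 5      # number of arms
L = 3      # number of agents
C = 10.0   # adversary's total corruption budget (unknown to the agents)
T = 1000   # time horizon

true_means = [random.random() for _ in range(K)]

# Heterogeneous access: each agent can pull only a subset of the arms.
access = [{0, 1, 2}, {1, 2, 3}, {2, 3, 4}]

# L_min: the minimum number of agents with mutual access to some arm.
L_min = min(sum(arm in s for s in access) for arm in range(K))

budget_left = C

def pull(arm):
    """Draw a Bernoulli reward; the adversary may flip it while budget lasts."""
    global budget_left
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    if budget_left >= 1.0 and random.random() < 0.1:
        budget_left -= 1.0        # each flip costs 1 unit of corruption budget
        reward = 1.0 - reward     # agents observe (and share) the corrupted value
    return reward

# Cooperative statistics: observations shared by all agents are pooled per arm.
counts = [0] * K
sums = [0.0] * K

for _ in range(T):
    for agent in range(L):
        arm = random.choice(sorted(access[agent]))  # uniform play, for illustration only
        r = pull(arm)
        counts[arm] += 1
        sums[arm] += r

for arm in range(K):
    est = sums[arm] / max(counts[arm], 1)
    print(f"arm {arm}: true mean {true_means[arm]:.2f}, pooled estimate {est:.2f}")
print(f"L = {L}, L_min = {L_min}, corruption spent = {C - budget_left:.0f} of {C:.0f}")
```

In this sketch, the pooled estimates stay close to the true means because each arm accumulates on the order of $T L / K$ shared observations while the adversary can flip at most $C$ of them; the paper's guarantee concerns the harder regret objective, where the corruption cost enters only as the additive $O((L / L_{\min}) C)$ term stated in the abstract.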