Learning in Congestion Games with Bandit Feedback
Saved in:
Main authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | In this paper, we investigate Nash-regret minimization in congestion games, a
class of games with benign theoretical structure and broad real-world
applications. We first propose a centralized algorithm based on the optimism in
the face of uncertainty principle for congestion games with (semi-)bandit
feedback, and obtain finite-sample guarantees. Then we propose a decentralized
algorithm via a novel combination of the Frank-Wolfe method and G-optimal
design. By exploiting the structure of the congestion game, we show the sample
complexity of both algorithms depends only polynomially on the number of
players and the number of facilities, but not the size of the action set, which
can be exponentially large in terms of the number of facilities. We further
define a new problem class, Markov congestion games, which allows us to model
the non-stationarity in congestion games. We propose a centralized algorithm
for Markov congestion games, whose sample complexity again has only polynomial
dependence on all relevant problem parameters, but not the size of the action
set. |
---|---|
DOI: | 10.48550/arxiv.2206.01880 |
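The abstract's key structural point is that, under (semi-)bandit feedback, rewards can be estimated per facility rather than per action, so the number of statistics to learn scales with players and facilities instead of with the (possibly exponential) action set. The following is a minimal illustrative sketch of that idea only, not the paper's algorithm: each player maintains a UCB-style optimistic estimate for every facility and picks the action whose facilities have the largest summed optimistic value. All names, reward shapes, and constants are assumptions made for the example.

```python
# Illustrative sketch of semi-bandit feedback in a congestion game:
# players choose subsets of facilities, observe a noisy reward for each
# facility they used, and keep per-facility optimistic (UCB-style) estimates.
import itertools
import math
import random

NUM_PLAYERS = 3
NUM_FACILITIES = 4
# Hypothetical action set: every non-empty subset of facilities.
ACTIONS = [set(s) for r in range(1, NUM_FACILITIES + 1)
           for s in itertools.combinations(range(NUM_FACILITIES), r)]

def facility_reward(facility, load):
    """Assumed congestion effect: a facility's reward decays with its load."""
    base = 1.0 / (facility + 1)
    return base / load + random.gauss(0.0, 0.05)  # noisy semi-bandit feedback

# Per-player, per-facility running statistics (counts and empirical means).
counts = [[0] * NUM_FACILITIES for _ in range(NUM_PLAYERS)]
means = [[0.0] * NUM_FACILITIES for _ in range(NUM_PLAYERS)]

def optimistic_action(player, t):
    """Pick the action maximizing the sum of per-facility UCB estimates."""
    def ucb(f):
        if counts[player][f] == 0:
            return float("inf")  # force exploration of unseen facilities
        bonus = math.sqrt(2.0 * math.log(t + 1) / counts[player][f])
        return means[player][f] + bonus
    return max(ACTIONS, key=lambda a: sum(ucb(f) for f in a))

for t in range(200):
    chosen = [optimistic_action(p, t) for p in range(NUM_PLAYERS)]
    loads = [sum(f in a for a in chosen) for f in range(NUM_FACILITIES)]
    for p, action in enumerate(chosen):
        for f in action:
            r = facility_reward(f, loads[f])
            counts[p][f] += 1
            means[p][f] += (r - means[p][f]) / counts[p][f]  # incremental mean

print("estimated facility rewards per player:",
      [[round(m, 2) for m in row] for row in means])
```

Even though `ACTIONS` enumerates all 2^m - 1 subsets here for simplicity, the learned quantities are only the per-player, per-facility counts and means, which is the scaling behavior the abstract highlights.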