Efficient Learning in Large-Scale Combinatorial Semi-Bandits
Saved in:
Main authors: , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Summary: A stochastic combinatorial semi-bandit is an online learning problem where at each step a learning agent chooses a subset of ground items subject to combinatorial constraints, and then observes stochastic weights of these items and receives their sum as a payoff. In this paper, we consider efficient learning in large-scale combinatorial semi-bandits with linear generalization, and as a solution, propose two learning algorithms called Combinatorial Linear Thompson Sampling (CombLinTS) and Combinatorial Linear UCB (CombLinUCB). Both algorithms are computationally efficient as long as the offline version of the combinatorial problem can be solved efficiently. We establish that CombLinTS and CombLinUCB are also provably statistically efficient under reasonable assumptions, by developing regret bounds that are independent of the problem scale (number of items) and sublinear in time. We also evaluate CombLinTS on a variety of problems with thousands of items. Our experimental results demonstrate that CombLinTS is scalable, robust to the choice of algorithm parameters, and significantly outperforms the best of our baselines.
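The abstract describes the interaction loop at a high level: the agent scores items through a linear generalization, hands the scores to an offline combinatorial solver, observes the weights of the chosen items, and updates its estimates. The sketch below illustrates that loop in the spirit of CombLinTS under assumptions of our own (a top-k constraint as the offline oracle, Gaussian noise, a Bayesian linear-regression posterior, and helper names such as `top_k_oracle` and `comb_lin_ts`); it is not the paper's reference implementation.

```python
# Illustrative sketch of Thompson sampling for a combinatorial semi-bandit
# with linear generalization (in the spirit of CombLinTS). All specifics here
# (top-k oracle, Gaussian noise model, function names) are our own assumptions.
import numpy as np

def top_k_oracle(estimated_weights, k):
    """Offline combinatorial solver for the simplest constraint:
    pick the k items with the largest estimated weights."""
    return np.argsort(estimated_weights)[-k:]

def comb_lin_ts(feature_matrix, true_theta, k, horizon,
                noise_std=0.1, prior_var=1.0, seed=0):
    """Run Thompson sampling with a Bayesian linear-regression posterior
    over the d-dimensional parameter vector theta.

    feature_matrix: (num_items, d) array mapping items to features.
    true_theta:     (d,) ground-truth parameter, used only to simulate feedback.
    """
    rng = np.random.default_rng(seed)
    num_items, d = feature_matrix.shape
    precision = np.eye(d) / prior_var   # posterior precision matrix
    b = np.zeros(d)                     # precision-weighted observation sum
    total_reward = 0.0
    for _ in range(horizon):
        # Sample a parameter vector from the current posterior.
        cov = np.linalg.inv(precision)
        theta_sample = rng.multivariate_normal(cov @ b, cov)
        # Estimate item weights and call the offline oracle.
        chosen = top_k_oracle(feature_matrix @ theta_sample, k)
        # Semi-bandit feedback: observe a noisy weight for every chosen item.
        observed = feature_matrix[chosen] @ true_theta \
            + noise_std * rng.standard_normal(k)
        total_reward += observed.sum()
        # Bayesian linear-regression update using the observed items only.
        X = feature_matrix[chosen]
        precision += X.T @ X / noise_std**2
        b += X.T @ observed / noise_std**2
    return total_reward
```

Note the two properties the abstract emphasizes: the per-step computation reduces to one call to the offline solver, and the posterior lives in the d-dimensional feature space rather than over individual items, which is what makes the approach insensitive to the number of ground items.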
DOI: 10.48550/arxiv.1406.7443