Randomized Exploration for Non-Stationary Stochastic Linear Bandits
Main authors: | , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | We investigate two perturbation approaches to overcome the conservatism that optimism-based algorithms chronically suffer from in practice. The first approach replaces optimism with a simple randomization when using confidence sets. The second one adds random perturbations to its current estimate before maximizing the expected reward. For non-stationary linear bandits, where each action is associated with a $d$-dimensional feature and the unknown parameter is time-varying with total variation $B_T$, we propose two randomized algorithms, Discounted Randomized LinUCB (D-RandLinUCB) and Discounted Linear Thompson Sampling (D-LinTS), via the two perturbation approaches. We highlight the statistical optimality versus computational efficiency trade-off between them: the former asymptotically achieves the optimal dynamic regret $\tilde{O}(d^{7/8} B_T^{1/4} T^{3/4})$, while the latter is oracle-efficient at the cost of an extra logarithmic factor in the number of arms compared to the minimax-optimal dynamic regret. In a simulation study, both algorithms show outstanding performance in tackling the conservatism issue that Discounted LinUCB struggles with. |
---|---|
DOI: | 10.48550/arxiv.1912.05695 |
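
As a rough illustration of the second perturbation approach described in the abstract (adding a random perturbation to the current estimate before maximizing the estimated reward), the Python sketch below maintains a discounted regularized least-squares estimate and draws a Gaussian perturbation around it before picking an arm. The class name, the covariance `v**2 * V^{-1}`, and all parameter choices are illustrative assumptions for this sketch, not the exact D-LinTS specification from the paper.

```python
import numpy as np

class DiscountedPerturbedLinearBandit:
    """Minimal sketch: discounted least squares + Gaussian perturbation.

    Not the authors' exact algorithm; a stand-in showing the idea of
    perturbing the estimate before reward maximization under discounting.
    """

    def __init__(self, d, gamma=0.99, lam=1.0, v=1.0, seed=0):
        self.d, self.gamma, self.lam, self.v = d, gamma, lam, v
        self.rng = np.random.default_rng(seed)
        self.V = lam * np.eye(d)   # discounted Gram matrix (regularized)
        self.b = np.zeros(d)       # discounted reward-weighted feature sum

    def select(self, arms):
        """arms: (K, d) array of feature vectors; returns the chosen index."""
        theta_hat = np.linalg.solve(self.V, self.b)
        # Perturb the estimate; v^2 * V^{-1} is a simple illustrative choice
        # of covariance, not the one derived in the paper.
        cov = self.v ** 2 * np.linalg.inv(self.V)
        theta_tilde = self.rng.multivariate_normal(theta_hat, cov)
        return int(np.argmax(arms @ theta_tilde))

    def update(self, x, reward):
        """Discount past observations, then add the new one (x, reward)."""
        eye = self.lam * np.eye(self.d)
        # Keep the regularizer undiscounted: V = sum gamma^(t-s) x_s x_s^T + lam*I
        self.V = self.gamma * (self.V - eye) + np.outer(x, x) + eye
        self.b = self.gamma * self.b + reward * x


if __name__ == "__main__":
    # Tiny synthetic run with a slowly drifting parameter (non-stationarity).
    d, K, T = 5, 20, 200
    rng = np.random.default_rng(1)
    theta_star = rng.normal(size=d)
    agent = DiscountedPerturbedLinearBandit(d)
    for t in range(T):
        arms = rng.normal(size=(K, d))
        k = agent.select(arms)
        reward = arms[k] @ theta_star + 0.1 * rng.normal()
        agent.update(arms[k], reward)
        theta_star += 0.01 * rng.normal(size=d)  # slow drift of the true parameter
```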