Solving Bernoulli Rank-One Bandits with Unimodal Thompson Sampling
Format: Article
Language: English
Abstract: Stochastic Rank-One Bandits (Katariya et al., 2017a,b) are a simple framework for regret minimization problems over rank-one matrices of arms. The initially proposed algorithms are proved to have logarithmic regret but do not match the existing lower bound for this problem. We close this gap by first proving that rank-one bandits are a particular instance of unimodal bandits, and then providing a new analysis of Unimodal Thompson Sampling (UTS), initially proposed by Paladino et al. (2017). We prove an asymptotically optimal bound on the frequentist regret of UTS and support our claims with simulations showing the significant improvement of our method over the state of the art.
DOI: 10.48550/arxiv.1912.03074
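To illustrate the idea behind Unimodal Thompson Sampling referenced in the abstract, here is a minimal sketch on a one-dimensional unimodal Bernoulli bandit (means rise then fall along a line graph). It restricts Thompson Sampling to the empirical leader and its graph neighbors, and exploits the leader deterministically every third time it holds the lead, a common leader-exploitation rule in unimodal bandit algorithms. All parameter names and the specific leader-exploitation schedule are illustrative assumptions, not the paper's exact algorithm or analysis.

```python
import random

def uts(means, horizon, seed=0):
    """Sketch of Unimodal Thompson Sampling on a line graph of arms."""
    rng = random.Random(seed)
    k = len(means)
    s = [0] * k          # observed successes per arm
    f = [0] * k          # observed failures per arm
    lead_count = [0] * k  # how often each arm has been the leader
    for _ in range(horizon):
        pulls = [s[i] + f[i] for i in range(k)]
        # Empirical leader; arms never pulled are treated as leading.
        leader = max(range(k),
                     key=lambda i: s[i] / pulls[i] if pulls[i] else 1.0)
        lead_count[leader] += 1
        # Neighborhood of the leader on the line graph.
        neigh = [j for j in (leader - 1, leader, leader + 1) if 0 <= j < k]
        if lead_count[leader] % 3 == 0:
            arm = leader  # deterministic leader exploitation
        else:
            # Thompson sampling with Beta(1+s, 1+f) posteriors,
            # restricted to the leader's neighborhood.
            arm = max(neigh,
                      key=lambda j: rng.betavariate(1 + s[j], 1 + f[j]))
        # Bernoulli reward draw.
        if rng.random() < means[arm]:
            s[arm] += 1
        else:
            f[arm] += 1
    return s, f

# Unimodal means peaking at index 2; UTS should concentrate its pulls there.
s, f = uts([0.2, 0.5, 0.8, 0.5, 0.2], horizon=5000)
best = max(range(5), key=lambda i: s[i] + f[i])
print(best)
```

Because exploration stays local to the leader, arms far from the mode are pulled only rarely, which is the structural property that lets unimodal algorithms improve on generic bandit strategies over large rank-one arm matrices.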