Towards Costless Model Selection in Contextual Bandits: A Bias-Variance Perspective
Main authors: | , , |
---|---|
Format: | Article |
Language: | eng |
Online access: | Order full text |
Abstract: | Model selection in supervised learning provides costless guarantees, as if the model that best balances bias and variance were known a priori. We study the feasibility of similar guarantees for cumulative regret minimization in the stochastic contextual bandit setting. Recent work [Marinov and Zimmert, 2021] identifies instances where no algorithm can guarantee costless regret bounds. Nevertheless, we identify benign conditions under which costless model selection is feasible: gradually increasing class complexity, and diminishing marginal returns for best-in-class policy value as class complexity increases. Our algorithm is based on a novel misspecification test, and our analysis demonstrates the benefits of using model selection for reward estimation. Unlike prior work on model selection in contextual bandits, our algorithm carefully adapts to the evolving bias-variance trade-off as more data is collected. In particular, our algorithm and analysis go beyond adapting to the complexity of the simplest realizable class and instead adapt to the complexity of the simplest class whose estimation variance dominates the bias. For short horizons, this provides improved regret guarantees that depend on the complexity of simpler classes. |
DOI: | 10.48550/arxiv.2106.06483 |
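
As a schematic gloss on the abstract's central quantity (an illustration based only on the abstract, not the paper's exact statement): suppose nested classes $\mathcal{F}_1 \subseteq \dots \subseteq \mathcal{F}_M$ have complexities $d_1 \le \dots \le d_M$ and best-in-class biases $b_1 \ge \dots \ge b_M$. If the estimation variance of class $i$ after $t$ rounds scales roughly as $\sqrt{d_i / t}$, then the "simplest class whose estimation variance dominates the bias" at time $t$ would be

$$ i^*(t) \;=\; \min\bigl\{\, i \le M \;:\; b_i \le \sqrt{d_i / t} \,\bigr\}, $$

and a costless guarantee would scale as $\tilde{O}\bigl(\sqrt{d_{i^*(T)}\, T}\bigr)$ rather than paying for the complexity of the largest realizable class. For short horizons $T$, the variance term $\sqrt{d_i / T}$ is large, so $i^*(T)$ is small and the bound depends only on the complexity of a simpler class, consistent with the abstract's closing claim.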