Online learning with Corrupted context: Corrupted Contextual Bandits
Format: Article
Language: English
Online access: Order full text
Abstract: We consider a novel variant of the contextual bandit problem (i.e., the multi-armed bandit with side-information, or context, available to a decision-maker) where the context used at each decision may be corrupted ("useless context"). This new problem is motivated by certain online settings, including clinical trial and ad recommendation applications. To address the corrupted-context setting, we propose to combine the standard contextual bandit approach with a classical multi-armed bandit mechanism. Unlike standard contextual bandit methods, we are able to learn from all iterations, even those with corrupted context, by improving the computation of the expected reward for each arm. Promising empirical results are obtained on several real-life datasets.
DOI: 10.48550/arxiv.2006.15194
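
The abstract describes combining a contextual bandit with a classical multi-armed bandit mechanism so that rounds with corrupted context still refine the per-arm reward estimates. The record gives no algorithmic detail, so the sketch below is only an illustration of that idea under assumptions of our own: a LinUCB-style contextual learner with a UCB1 fallback on corrupted rounds, where the context-free per-arm means are updated on every round. The class name HybridCorruptedContextBandit, the corrupted flag, and the alpha parameter are hypothetical, not from the paper.

```python
import numpy as np

class HybridCorruptedContextBandit:
    """Illustrative sketch (not the paper's algorithm): a LinUCB-style
    contextual learner backed by a context-free UCB1 fallback, so every
    round, corrupted or not, updates the per-arm expected-reward estimates."""

    def __init__(self, n_arms, dim, alpha=1.0):
        self.n_arms = n_arms
        self.alpha = alpha  # assumed exploration width for the contextual scores
        # Contextual (LinUCB-style) ridge-regression statistics per arm.
        self.A = [np.eye(dim) for _ in range(n_arms)]
        self.b = [np.zeros(dim) for _ in range(n_arms)]
        # Context-free (classical MAB) statistics per arm.
        self.counts = np.zeros(n_arms)
        self.means = np.zeros(n_arms)
        self.t = 0

    def select(self, context, corrupted):
        self.t += 1
        if corrupted:
            # Fall back to classical UCB1 on the context-free means.
            bonus = np.sqrt(2.0 * np.log(self.t) / np.maximum(self.counts, 1.0))
            scores = self.means + bonus
        else:
            # LinUCB score: point estimate plus confidence width.
            scores = np.empty(self.n_arms)
            for a in range(self.n_arms):
                A_inv = np.linalg.inv(self.A[a])
                theta = A_inv @ self.b[a]
                scores[a] = theta @ context + self.alpha * np.sqrt(context @ A_inv @ context)
        return int(np.argmax(scores))

    def update(self, arm, context, reward, corrupted):
        # The context-free estimates are updated on *every* round, which is
        # how corrupted-context iterations still contribute to learning.
        self.counts[arm] += 1.0
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]
        if not corrupted:
            # Contextual statistics only trust uncorrupted contexts.
            self.A[arm] += np.outer(context, context)
            self.b[arm] += reward * context

# Toy usage: two arms, 3-dimensional contexts, half the rounds corrupted.
rng = np.random.default_rng(0)
bandit = HybridCorruptedContextBandit(n_arms=2, dim=3)
for _ in range(100):
    ctx = rng.normal(size=3)
    corrupted = bool(rng.random() < 0.5)
    arm = bandit.select(ctx, corrupted)
    reward = float(rng.random() < (0.3 + 0.4 * arm))  # arm 1 pays more on average
    bandit.update(arm, ctx, reward, corrupted)
```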