Productization Challenges of Contextual Multi-Armed Bandits
Main Authors: | , , , , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Summary: | The contextual multi-armed bandit is a well-known and widely
accepted online optimization algorithm, used in many Web experiences to
tailor content or presentation to user traffic. Much has been published on
the theoretical guarantees (e.g., regret bounds) of proposed algorithmic
variants, but relatively little attention has been devoted to the challenges
encountered while productizing contextual bandit schemes in large-scale
settings. This work enumerates several productization challenges we
encountered while leveraging contextual bandits for two concrete use cases
at scale. We discuss how to (1) determine the context (engineer the
features) that models the bandit arms; (2) sanity-check the health of the
optimization process; (3) evaluate the process in an offline manner; (4) add
potential actions (arms) on the fly to a running process; (5) subject the
decision process to constraints; and (6) iteratively improve the online
learning algorithm. For each such challenge, we explain the issue, provide
our approach, and relate it to prior art where applicable. |
DOI: | 10.48550/arxiv.1907.04884 |
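
The optimization loop the abstract refers to can be made concrete with a
standard contextual bandit such as LinUCB (Li et al., 2010). The sketch below
is illustrative only, not the paper's implementation; the feature dimension,
arm count, exploration parameter, and synthetic linear reward model are all
assumptions chosen for the example.

```python
import numpy as np

class LinUCBArm:
    """Per-arm ridge-regression state for the LinUCB algorithm."""
    def __init__(self, dim, alpha=1.0):
        self.alpha = alpha       # exploration strength (assumed value)
        self.A = np.eye(dim)     # d x d design matrix, identity prior
        self.b = np.zeros(dim)   # reward-weighted sum of context vectors

    def ucb(self, x):
        """Upper confidence bound on this arm's expected reward for context x."""
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b   # ridge estimate of the arm's weight vector
        return theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)

    def update(self, x, reward):
        """Fold one observed (context, reward) pair into the arm's state."""
        self.A += np.outer(x, x)
        self.b += reward * x

# Toy loop: 3 arms, 5-dimensional contexts, synthetic linear rewards.
rng = np.random.default_rng(0)
true_theta = rng.normal(size=(3, 5))   # hidden per-arm weights (simulation only)
arms = [LinUCBArm(dim=5) for _ in range(3)]
for t in range(1000):
    x = rng.normal(size=5)                           # observed context features
    a = int(np.argmax([arm.ucb(x) for arm in arms])) # pick highest-UCB arm
    reward = true_theta[a] @ x + rng.normal(scale=0.1)
    arms[a].update(x, reward)
```

Because all learned state in this sketch is per-arm, challenge (4) from the
abstract, adding arms to a running process, reduces here to appending a fresh
LinUCBArm with its identity prior; the other challenges (health checks,
offline evaluation, constraints) would sit around this loop rather than
inside it.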