Non-stationary Bandits with Habituation and Recovery Dynamics and Knapsack Constraints

Bibliographic Details
Main Authors: He, Qinyang, Mintz, Yonatan
Format: Article
Language: English
Description
Abstract: Multi-armed bandit models have proven useful for modeling many real-world problems in control and sequential decision making with partial information. However, in many scenarios, such as those prevalent in healthcare and operations management, the decision maker's expected reward decreases if an action is selected too frequently, while it may recover if they abstain from selecting that action. This setting is further complicated when choosing a particular action also expends a random amount of a limited resource, where the distribution of this expenditure is also initially unknown to the decision maker. In this paper we study a class of models that addresses this setting, which we call reducing or gaining unknown efficacy bandits with stochastic knapsack constraints (ROGUEwK). We propose a combined upper confidence bound (UCB) and lower confidence bound (LCB) approximation algorithm for optimizing this model. Our algorithm chooses which action to play at each time point by solving a linear program (LP) that takes the UCBs of the average rewards and the LCBs of the average costs as inputs. We show that the regret of our algorithm is sub-linear as a function of time and total constraint budget when compared to a dynamic oracle. We validate the performance of our algorithm against existing state-of-the-art non-stationary and knapsack bandit approaches in a simulation study and show that our method achieves an average improvement of 13% in total reward.
DOI:10.48550/arxiv.2403.17073
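For illustration, below is a minimal sketch of what a UCB/LCB-driven LP selection step of this kind might look like; it is not the paper's exact ROGUEwK algorithm. It assumes per-arm UCB reward estimates, per-arm LCB cost estimates, and a per-round budget rate, and it solves a small LP that maximizes the optimistic expected reward of a randomized arm choice subject to a pessimistic expected-cost constraint. The function choose_action, its parameters, and the exact LP form are assumptions introduced here for illustration.

# Hypothetical sketch of a UCB/LCB-based LP selection step
# (illustrative only; not the authors' exact ROGUEwK algorithm).
import numpy as np
from scipy.optimize import linprog

def choose_action(ucb_rewards, lcb_costs, budget_rate, rng=None):
    """Sample an arm from the distribution that maximizes the UCB-estimated
    reward while keeping the LCB-estimated cost within the per-round budget.
    All names and the exact LP form are assumptions, not the paper's."""
    k = len(ucb_rewards)
    c = -np.asarray(ucb_rewards, dtype=float)                 # linprog minimizes, so negate rewards
    A_ub = np.asarray(lcb_costs, dtype=float).reshape(1, k)   # expected LCB cost <= budget_rate
    b_ub = np.array([budget_rate])
    A_eq = np.ones((1, k))                                    # probabilities sum to one
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * k, method="highs")
    if res.success:
        probs = np.clip(res.x, 0.0, None)
        probs /= probs.sum()
    else:
        probs = np.full(k, 1.0 / k)                           # infeasible LP: fall back to uniform
    rng = rng or np.random.default_rng()
    return int(rng.choice(k, p=probs))

# Example: three arms with optimistic reward and pessimistic cost estimates.
arm = choose_action(ucb_rewards=[0.9, 0.6, 0.4], lcb_costs=[0.8, 0.3, 0.1], budget_rate=0.5)
print("selected arm:", arm)

Using the LCB for costs is the conservative choice here: underestimating resource consumption on the constraint side keeps the LP feasible while the true (unknown) costs are controlled through the confidence bounds, mirroring the optimism-for-rewards, pessimism-for-costs pairing described in the abstract.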