Offline Multi-Action Policy Learning: Generalization and Optimization
Published in: Operations Research, 2023-01, Vol. 71 (1), pp. 148-183
Authors: Z. Zhou, S. Athey, S. Wager
Format: Article
Language: English
Online access: Full text
Summary: As a result of the digitization of the economy, more and more decision makers from a wide range of domains have gained the ability to target products, services, and information provision based on individual characteristics. Examples include selecting offers, prices, advertisements, or emails to send to consumers, choosing a bid to submit in contextual first-price auctions, and determining which medication to prescribe to a patient. The key to enabling this is to learn a treatment policy from historical observational data in a sample-efficient way, thereby uncovering the best personalized treatment recommendation. In "Offline Multi-Action Policy Learning: Generalization and Optimization," Z. Zhou, S. Athey, and S. Wager provide a sample-optimal policy learning algorithm that is computationally efficient and learns a tree-based treatment policy from observational data. In our quest toward fully automated personalization, the work provides a theoretically sound and practically implementable approach.
In many settings, a decision maker wishes to learn a rule, or policy, that maps from observable characteristics of an individual to an action. Examples include selecting offers, prices, advertisements, or emails to send to consumers, choosing a bid to submit in contextual first-price auctions, and determining which medication to prescribe to a patient. In this paper, we study the offline multi-action policy learning problem with observational data, where the policy may need to respect budget constraints or belong to a restricted policy class such as decision trees. By using the standard augmented inverse propensity weight estimator, we design and implement a policy learning algorithm that achieves asymptotically minimax-optimal regret. To the best of our knowledge, this is the first result of this type in the multi-action setup, and it provides a substantial performance improvement over existing learning algorithms. We then consider additional computational challenges that arise in implementing our method for the case where the policy is restricted to take the form of a decision tree. We propose two different approaches: one using a mixed integer program formulation and the other using a tree-search-based algorithm.
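The following is a minimal sketch, not the authors' implementation, of the general recipe the abstract describes: build doubly robust (augmented inverse propensity weight) scores for each unit and action, then search a restricted policy class (here, depth-1 trees) for the policy maximizing the estimated value. All names, the synthetic data, and the crude plug-in outcome and propensity models are illustrative assumptions.

```python
# Sketch of AIPW-score-based offline multi-action policy learning.
# Assumes logged data (X, A, Y) with estimates mu_hat[i, a] of E[Y | X_i, a]
# and e_hat[i, a] of P(A_i = a | X_i); all names here are hypothetical.
import numpy as np

def aipw_scores(Y, A, mu_hat, e_hat):
    """Doubly robust score Gamma[i, a] estimating the value of action a for unit i."""
    n, _ = mu_hat.shape
    Gamma = mu_hat.copy()
    rows = np.arange(n)
    # Inverse-propensity correction applied only to the action actually observed.
    Gamma[rows, A] += (Y - mu_hat[rows, A]) / e_hat[rows, A]
    return Gamma

def best_depth1_tree(X, Gamma):
    """Exhaustive search over depth-1 trees (one split, one action per leaf),
    maximizing the total AIPW-estimated policy value."""
    n, p = X.shape
    best = (-np.inf, None)
    for j in range(p):
        for threshold in np.unique(X[:, j]):
            left = X[:, j] <= threshold
            value = Gamma[left].sum(axis=0).max() + Gamma[~left].sum(axis=0).max()
            if value > best[0]:
                actions = (int(Gamma[left].sum(axis=0).argmax()),
                           int(Gamma[~left].sum(axis=0).argmax()))
                best = (value, (j, threshold, actions))
    return best

# Example with synthetic logged data (uniform logging policy over 4 actions).
rng = np.random.default_rng(0)
n, p, d = 500, 3, 4
X = rng.normal(size=(n, p))
A = rng.integers(0, d, size=n)
Y = rng.normal(size=n) + (A == (X[:, 0] > 0).astype(int))
mu_hat = np.zeros((n, d))            # crude outcome model, for illustration only
e_hat = np.full((n, d), 1.0 / d)     # known uniform propensities
Gamma = aipw_scores(Y, A, mu_hat, e_hat)
print(best_depth1_tree(X, Gamma))
```

The exhaustive split search above stands in for the paper's more scalable proposals (a mixed integer program formulation and a tree-search-based algorithm) for optimizing over deeper decision-tree policy classes.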
Funding: This work was supported by the National Science Foundation [Grants CCF-2106508 and DMS-1916163] and the Office of Naval Research [Grant N00014-17-1-2131].
ISSN: 0030-364X, 1526-5463
DOI: 10.1287/opre.2022.2271