Contextual Bandits in a Survey Experiment on Charitable Giving: Within-Experiment Outcomes versus Policy Learning
Abstract: We design and implement an adaptive experiment (a ``contextual bandit'') to learn a targeted treatment assignment policy, where the goal is to use a participant's survey responses to determine which charity to expose them to in a donation solicitation. The design balances two competing objectives: optimizing the outcomes for the subjects in the experiment (``cumulative regret minimization'') and gathering data that will be most useful for policy learning, that is, for learning an assignment rule that will maximize welfare if used after the experiment (``simple regret minimization''). We evaluate alternative experimental designs by collecting pilot data and then conducting a simulation study. Next, we implement our selected algorithm. Finally, we perform a second simulation study anchored to the collected data that evaluates the benefits of the algorithm we chose. Our first result is that the value of a learned policy in this setting is higher when data is collected via uniform randomization rather than adaptively using standard cumulative regret minimization or policy learning algorithms. We propose a simple heuristic for adaptive experimentation that improves upon uniform randomization from the perspective of policy learning, at the expense of increasing cumulative regret relative to alternative bandit algorithms. The heuristic modifies an existing contextual bandit algorithm by (i) imposing a lower bound on assignment probabilities that decays slowly, so that no arm is discarded too quickly, and (ii) after adaptively collecting data, restricting policy learning to select from arms where sufficient data has been gathered.
DOI: 10.48550/arxiv.2211.12004
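
The sketch below is a minimal illustration (not the authors' implementation) of the two heuristic ingredients named in the abstract: (i) a slowly decaying lower bound on assignment probabilities so that no arm is discarded too quickly, and (ii) restricting post-experiment policy learning to arms with sufficient observations. The linear-reward environment, the softmax/ridge assignment model, the decay schedule `c / t^alpha`, the arm count `K = 5`, and the `min_obs` threshold are all assumptions made for illustration, not details from the paper.

```python
"""Illustrative sketch of a probability-floored contextual bandit with
restricted post-experiment policy learning. All modeling choices are
assumptions; the paper's actual algorithm and parameters may differ."""
import numpy as np

rng = np.random.default_rng(0)
K, D, T = 5, 3, 2000          # arms (charities), survey covariates, horizon -- illustrative

def floor_schedule(t, c=0.2, alpha=0.25):
    """(i) Slowly decaying lower bound on each arm's assignment probability.
    Capped at 1/K so the mixture below stays a valid distribution."""
    return min(c / (t + 1) ** alpha, 1.0 / K)

# Toy environment: linear rewards per arm, unknown to the algorithm.
true_theta = rng.normal(size=(K, D))

# Per-arm ridge regression statistics (stand-in for the experiment's model).
A = np.stack([np.eye(D) for _ in range(K)])   # X'X + ridge penalty
b = np.zeros((K, D))                          # X'y
counts = np.zeros(K, dtype=int)
log = []                                      # (context, arm, reward, propensity)

for t in range(T):
    x = rng.normal(size=D)
    theta_hat = np.array([np.linalg.solve(A[k], b[k]) for k in range(K)])
    scores = theta_hat @ x
    p_model = np.exp(scores - scores.max())
    p_model /= p_model.sum()                  # softmax over estimated rewards
    eps = floor_schedule(t)
    p = (1 - K * eps) * p_model + eps         # every arm keeps probability >= eps
    arm = rng.choice(K, p=p)
    reward = true_theta[arm] @ x + rng.normal(scale=0.5)
    A[arm] += np.outer(x, x)
    b[arm] += reward * x
    counts[arm] += 1
    log.append((x, arm, reward, p[arm]))      # propensities kept for off-policy analysis

# (ii) Policy learning restricted to arms with enough observations.
min_obs = 100                                 # assumed threshold
eligible = np.flatnonzero(counts >= min_obs)
theta_hat = np.array([np.linalg.solve(A[k], b[k]) for k in range(K)])

def learned_policy(x):
    """Assign context x to the best arm among the well-sampled ones."""
    return eligible[np.argmax(theta_hat[eligible] @ x)]

print("arm counts:", counts, "eligible arms:", eligible)
```

In this toy version, the floor plays the role of cumulative-regret sacrifice: every arm keeps receiving traffic, which costs within-experiment reward but keeps the logged data informative enough for the restricted policy-learning step at the end.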