ACE: Active Learning for Causal Inference with Expensive Experiments
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Experiments are the gold standard for causal inference. In many applications,
experimental units can often be recruited or chosen sequentially, and the
adaptive execution of such experiments may offer greatly improved inference of
causal quantities over non-adaptive approaches, particularly when experiments
are expensive. We thus propose a novel active learning method called ACE
(Active learning for Causal inference with Expensive experiments), which
leverages Gaussian process modeling of the conditional mean functions to guide
an informed sequential design of costly experiments. In particular, we develop
new acquisition functions for sequential design via the minimization of the
posterior variance of a desired causal estimand. Our approach facilitates
targeted learning of a variety of causal estimands, such as the average
treatment effect (ATE), the average treatment effect on the treated (ATTE), and
individualized treatment effects (ITE), and can be used for adaptive selection
of an experimental unit and/or the applied treatment. We then demonstrate in a
suite of numerical experiments the improved performance of ACE over baseline
methods for estimating causal estimands given a limited number of experiments.
DOI: 10.48550/arxiv.2306.07480
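
The abstract's key mechanism, that the posterior variance of a GP-modeled causal estimand depends only on which experiments are run and not on their outcomes, can be illustrated with a small sketch. The code below is not the authors' implementation of ACE; it assumes independent GP priors with a fixed RBF kernel for the treated and control conditional mean functions, a finite candidate pool, and a known noise level, and the helper names (rbf_kernel, ate_posterior_variance, acquire) are hypothetical. It scores each candidate (unit, treatment) pair by the posterior variance of the pool-averaged ATE that would result from running that experiment.

```python
# Illustrative sketch only: posterior-variance acquisition for ATE estimation
# with independent GP models of the treated and control conditional means.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def ate_posterior_variance(X_pool, X_tr, noise=1e-2):
    """Posterior variance of the pool-averaged mean function given inputs X_tr.

    For a GP, the posterior covariance depends only on the observed inputs,
    so candidate designs can be scored before any outcome is collected.
    """
    n = len(X_pool)
    K_pp = rbf_kernel(X_pool, X_pool)
    if len(X_tr) == 0:
        return K_pp.sum() / n**2
    K_pt = rbf_kernel(X_pool, X_tr)
    K_tt = rbf_kernel(X_tr, X_tr) + noise * np.eye(len(X_tr))
    K_post = K_pp - K_pt @ np.linalg.solve(K_tt, K_pt.T)
    return K_post.sum() / n**2  # Var[(1/n) * sum_i f(x_i)]

def acquire(X_pool, X_treat, X_control):
    """Pick the (unit, arm) pair that most reduces the posterior variance of
    ATE = mean_i [f1(x_i) - f0(x_i)] under independent GP priors on f1, f0."""
    best, best_var = None, np.inf
    var_c = ate_posterior_variance(X_pool, X_control)
    var_t = ate_posterior_variance(X_pool, X_treat)
    for i, x in enumerate(X_pool):
        x = x[None, :]
        # A treated observation only tightens the treated-arm GP, and vice versa.
        v_if_treat = ate_posterior_variance(X_pool, np.vstack([X_treat, x])) + var_c
        v_if_ctrl = var_t + ate_posterior_variance(X_pool, np.vstack([X_control, x]))
        for arm, v in ((1, v_if_treat), (0, v_if_ctrl)):
            if v < best_var:
                best, best_var = (i, arm), v
    return best, best_var

# Toy usage: 1-D covariates, no experiments run yet.
rng = np.random.default_rng(0)
X_pool = rng.uniform(-2, 2, size=(30, 1))
empty = np.empty((0, 1))
(unit, arm), var = acquire(X_pool, empty, empty)
print(f"next experiment: unit {unit}, arm {'treatment' if arm else 'control'}, "
      f"posterior ATE variance if run: {var:.4f}")
```

In principle, the same scoring idea carries over to the other estimands named in the abstract, e.g. averaging only over treated units for the ATTE, or weighting a single covariate value for an ITE, by changing the averaging weights inside the posterior-variance computation.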