Stochastic Optimization for Performative Prediction
Main authors: |  |
---|---|
Format: | Article |
Language: | eng |
Subjects: |  |
Online access: | Order full text |
Summary: | In performative prediction, the choice of a model influences the
distribution of future data, typically through actions taken based on the
model's predictions.
We initiate the study of stochastic optimization for performative prediction.
What sets this setting apart from traditional stochastic optimization is the
difference between merely updating model parameters and deploying the new
model. The latter triggers a shift in the distribution that affects future
data, while the former keeps the distribution as is.
Assuming smoothness and strong convexity, we prove rates of convergence both
for greedily deploying models after each stochastic update (greedy deploy)
and for taking several updates before redeploying (lazy deploy). In both
cases, our bounds smoothly recover the optimal $O(1/k)$ rate as the strength
of performativity decreases. Furthermore, they illustrate how, depending on
the strength of performative effects, there exists a regime in which each
approach outperforms the other. We experimentally explore the trade-off on
both synthetic data and a strategic classification simulator. |
DOI: | 10.48550/arxiv.2006.06887 |
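
The greedy/lazy distinction described in the summary is easy to make concrete. Below is a minimal sketch, not the authors' code: it assumes a toy one-dimensional quadratic loss and a Gaussian location-family distribution map whose mean shifts with the deployed model. The names (`EPS`, `sample`, `greedy_deploy`, `lazy_deploy`) and the step-size schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

EPS = 0.5    # assumed strength of performativity (how much D(theta) shifts)
SIGMA = 1.0  # assumed noise level of the data distribution

def sample(theta_deployed):
    """Draw one outcome from D(theta): a location family whose mean
    shifts with the currently deployed model (an assumed toy setup)."""
    return EPS * theta_deployed + SIGMA * rng.standard_normal()

def grad(theta, z):
    """Stochastic gradient of the strongly convex loss l(theta; z) = (theta - z)^2 / 2."""
    return theta - z

def greedy_deploy(theta0, k, lr=lambda t: 1.0 / (t + 1)):
    """Deploy after every stochastic update: each new sample
    reflects the latest model."""
    theta = theta0
    for t in range(k):
        z = sample(theta)  # the distribution already reflects theta
        theta -= lr(t) * grad(theta, z)
    return theta

def lazy_deploy(theta0, k, n=10, lr=lambda t: 1.0 / (t + 1)):
    """Take n SGD steps against the distribution induced by the last
    deployment, then redeploy; k counts deployments."""
    theta = theta0
    for t in range(k):
        deployed = theta  # the distribution stays frozen at the deployed model
        for s in range(n):
            z = sample(deployed)
            theta -= lr(t * n + s) * grad(theta, z)
    return theta

# In this toy problem the performatively stable point is theta = 0
# (for EPS < 1), so both schedules should drift toward it.
print("greedy:", greedy_deploy(5.0, 1000))
print("lazy:  ", lazy_deploy(5.0, 100, n=10))
```

Varying `EPS` in this sketch mimics the trade-off the summary describes: as `EPS` shrinks, the two schedules behave alike, while stronger performative effects separate them.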