Smoothed $f$-Divergence Distributionally Robust Optimization
Format: | Article |
Language: | English |
Online access: | Order full text |
Abstract: | In data-driven optimization, sample average approximation (SAA) is known to
suffer from the so-called optimizer's curse, which causes an over-optimistic
evaluation of the solution performance. We argue that a special type of
distributionally robust optimization (DRO) formulation offers theoretical
advantages in correcting for this optimizer's curse compared to simple
"margin" adjustments to SAA and other DRO approaches: it attains a
statistical bound on the out-of-sample performance, for a wide class of
objective functions and distributions, that is nearly tightest in terms of
exponential decay rate. This DRO uses an ambiguity set based on a
Kullback-Leibler (KL) divergence smoothed by the Wasserstein or
Lévy-Prokhorov (LP) distance via a suitable distance optimization.
Computationally, we also show that such a DRO, and its generalized versions
using smoothed $f$-divergence, are no harder than DRO problems based on
$f$-divergence or Wasserstein distances, rendering our DRO formulations both
statistically optimal and computationally viable. |
DOI: | 10.48550/arxiv.2306.14041 |
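
For orientation, a minimal mathematical sketch of the kind of formulation the abstract describes; the notation and radii below are illustrative assumptions, not the paper's exact definitions. With empirical distribution $\hat{P}_n$, loss $h(x,\xi)$, $f$-divergence $D_f$, and Wasserstein or Lévy-Prokhorov distance $W$, a divergence "smoothed via a suitable distance optimization" could take the form

$$D_f^{\epsilon}(P \,\|\, Q) = \inf_{P'} \left\{ D_f(P' \,\|\, Q) : W(P, P') \le \epsilon \right\},$$

with the corresponding DRO problem over the resulting ambiguity set

$$\min_{x} \; \sup_{P : \, D_f^{\epsilon}(P \,\|\, \hat{P}_n) \le \eta} \; \mathbb{E}_{P}\big[h(x, \xi)\big].$$

Here $\epsilon$ (smoothing radius) and $\eta$ (divergence budget) are hypothetical parameters used only to illustrate the structure; the KL case corresponds to $f(t) = t \log t$.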