Calibration of Distributionally Robust Empirical Optimization Models
Published in: Operations Research, 2021-09, Vol. 69 (5), pp. 1630-1650
Main authors: Gotoh; Kim; Lim
Format: Article
Language: English
Online access: Full text
Summary: In “Calibration of Robust Empirical Optimization Models,” Gotoh, Kim, and Lim study the statistical properties of φ-divergence distributionally robust optimization with concave rewards. They show that the worst-case sensitivity of the expected reward to deviations from the nominal model is equal to the in-sample variance, and that significant out-of-sample variance (sensitivity) reduction is possible with little impact on the mean if the robustness parameter is properly chosen. The authors also explain theoretically why the out-of-sample expected reward of robust solutions can sometimes “beat” that of sample average optimization, a phenomenon that has been observed empirically, and show that the difference is typically small. The paper highlights that robust solutions are not “too conservative” if both the mean and the variance (sensitivity) are considered when selecting the size of the uncertainty set (e.g., via the bootstrap).
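To make the variance statement in the summary concrete, the display below sketches the φ-divergence-penalized robust problem and its small-δ expansion in generic notation (empirical distribution P̂_n, reward r(x, ξ), and a φ normalized so that φ(1) = 0, φ′(1) = 0, φ″(1) > 0). This is a schematic reading of the result, not the paper's exact statement, and the constants depend on that normalization.

```latex
% Schematic penalized robust problem and its small-delta expansion.
% Notation (empirical distribution \hat{P}_n, reward r(x,\xi), normalization of \varphi)
% is illustrative; constants may differ from the paper's exact statement.
\[
  \max_{x}\ \min_{Q \ll \hat{P}_n}
  \Big\{ \mathbb{E}_{Q}\big[r(x,\xi)\big]
  + \tfrac{1}{\delta}\, D_{\varphi}\big(Q \,\|\, \hat{P}_n\big) \Big\},
  \qquad
  D_{\varphi}(Q \,\|\, \hat{P}_n) = \sum_{i=1}^{n} \hat{p}_i\,
  \varphi\!\Big(\frac{q_i}{\hat{p}_i}\Big),
\]
% For small \delta, the inner worst case behaves like a mean-variance objective:
\[
  \min_{Q}\Big\{ \mathbb{E}_{Q}[r(x,\xi)]
  + \tfrac{1}{\delta}\, D_{\varphi}(Q \,\|\, \hat{P}_n) \Big\}
  = \mathbb{E}_{\hat{P}_n}\big[r(x,\xi)\big]
  - \frac{\delta}{2\,\varphi''(1)}\,\mathrm{Var}_{\hat{P}_n}\big[r(x,\xi)\big]
  + o(\delta),
\]
% so the first-order sensitivity of the worst-case expected reward to \delta
% is proportional to the in-sample variance of the reward.
```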
We study the out-of-sample properties of robust empirical optimization problems with smooth φ-divergence penalties and smooth concave objective functions, and we develop a theory for data-driven calibration of the nonnegative “robustness parameter” δ that controls the size of the deviations from the nominal model. Building on the intuition that robust optimization reduces the sensitivity of the expected reward to errors in the model by controlling the spread of the reward distribution, we show that the first-order benefit of a “little bit of robustness” (i.e., δ small, positive) is a significant reduction in the variance of the out-of-sample reward, whereas the corresponding impact on the mean is almost an order of magnitude smaller. One implication is that substantial variance (sensitivity) reduction is possible at little cost if the robustness parameter is properly calibrated. To this end, we introduce the notion of a robust mean-variance frontier to select the robustness parameter and show that it can be approximated using resampling methods such as the bootstrap. Our examples show that robust solutions resulting from “open-loop” calibration methods (e.g., selecting a 90% confidence level regardless of the data and objective function) can be very conservative out of sample, whereas those corresponding to the robustness parameter that optimizes an estimate of the out-of-sample expected reward (e.g., via the bootstrap) with no regard for the variance are often insufficiently robust.
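The sketch below illustrates, in simplified form, how a bootstrap-based mean-variance frontier over δ might be traced. It uses a Kullback-Leibler penalty, for which the inner worst-case expectation has the closed form −(1/δ) log E[exp(−δ·reward)], a newsvendor reward, and a grid search over a scalar decision; all of these modeling choices, as well as the resampling scheme, are illustrative assumptions rather than the paper's exact calibration procedure.

```python
# Simplified sketch (not the paper's exact procedure) of bootstrap calibration of the
# robustness parameter delta, using a Kullback-Leibler penalty so that the inner
# worst-case expectation has the closed form  -(1/delta) * log E[exp(-delta * reward)].
# The newsvendor reward, grid sizes, and resampling scheme are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def reward(x, demand, price=5.0, cost=3.0):
    """Newsvendor reward: concave (piecewise linear) in the order quantity x."""
    return price * np.minimum(x, demand) - cost * x

def robust_objective(x, demand, delta):
    """KL-penalized worst-case expected reward; reduces to the sample mean as delta -> 0."""
    r = reward(x, demand)
    if delta == 0.0:
        return r.mean()
    return -np.log(np.mean(np.exp(-delta * r))) / delta

def solve_robust(demand, delta, x_grid):
    """Grid search for the maximizer of the robust objective (adequate for a scalar decision)."""
    vals = [robust_objective(x, demand, delta) for x in x_grid]
    return x_grid[int(np.argmax(vals))]

# "Training" sample from the (unknown) demand distribution.
data = rng.exponential(scale=10.0, size=200)
x_grid = np.linspace(0.0, 40.0, 201)
deltas = [0.0, 0.01, 0.05, 0.1, 0.2, 0.5]

# Bootstrap approximation of a mean-variance frontier over delta: solve the robust
# problem on each resample, then record the reward of that solution on the original
# sample as a stand-in for out-of-sample performance.
B = 200
print(" delta   mean reward   std of reward")
for delta in deltas:
    oos = []
    for _ in range(B):
        boot = rng.choice(data, size=data.size, replace=True)
        x_b = solve_robust(boot, delta, x_grid)
        oos.append(reward(x_b, data).mean())
    oos = np.array(oos)
    print(f"{delta:6.2f}   {oos.mean():11.3f}   {oos.std():13.3f}")
# One would then pick delta by trading off the mean against the variance along this
# frontier, rather than fixing a confidence level a priori.
```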
ISSN: 0030-364X, 1526-5463
DOI: 10.1287/opre.2020.2041