Higher-Order Expansion and Bartlett Correctability of Distributionally Robust Optimization
Saved in:
Main authors: |  |
---|---|
Format: | Article |
Language: | eng |
Subjects: |  |
Online access: | Order full text |
Abstract: | Distributionally robust optimization (DRO) is a worst-case framework for
stochastic optimization under uncertainty that has drawn a fast-growing body of
studies in recent years. When the underlying probability distribution is unknown
and observed from data, DRO suggests computing the worst-case distribution within
a so-called uncertainty set that captures the involved statistical uncertainty.
In particular, DRO with an uncertainty set constructed as a statistical-divergence
neighborhood ball has been shown to provide a tool for constructing valid
confidence intervals for nonparametric functionals, and bears a duality with
empirical likelihood (EL). In this paper, we show how adjusting the ball size of
this type of DRO can reduce higher-order coverage errors, similarly to the
Bartlett correction. Our correction, which applies to general von Mises
differentiable functionals, is more general than the existing EL literature,
which focuses only on smooth function models or $M$-estimation. Moreover, we
demonstrate a higher-order "self-normalizing" property of DRO regardless of the
choice of divergence. Our approach builds on the development of a higher-order
expansion of DRO, which is obtained through an asymptotic analysis of a fixed-point
equation arising from the Karush-Kuhn-Tucker conditions. |
DOI: | 10.48550/arxiv.2108.05908 |
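
For context, the divergence-ball construction described in the abstract can be sketched as follows. This is a generic illustration of the standard DRO/EL duality, not the paper's exact statement; the functional $\psi$, the calibration of the radius, and the constant $b$ below are illustrative assumptions. Given i.i.d. data $X_1,\dots,X_n$ with empirical distribution $\hat P_n$ and a target functional $\psi(P_0)$ (for instance $\psi(P)=\mathbb{E}_P[h(X)]$), the DRO confidence interval takes the worst cases over a divergence neighborhood ball:

$$\mathcal{U}_n(\rho)=\Big\{Q \ll \hat P_n \,:\, D_\phi\big(Q\,\|\,\hat P_n\big)\le \frac{\rho}{n}\Big\},\qquad [L_n,U_n]=\Big[\min_{Q\in\mathcal{U}_n(\rho)}\psi(Q),\ \max_{Q\in\mathcal{U}_n(\rho)}\psi(Q)\Big].$$

Under standard regularity conditions and a conventional normalization of the divergence ($\phi''(1)=2$), choosing $\rho=\chi^2_{1,1-\alpha}$ gives first-order valid coverage, $P\big(\psi(P_0)\in[L_n,U_n]\big)=1-\alpha+O(n^{-1})$, mirroring the Wilks-type calibration of empirical likelihood. The Bartlett-type adjustment discussed in the abstract amounts, schematically, to rescaling the ball size,

$$\rho\ \longmapsto\ \rho\Big(1+\frac{b}{n}\Big)\quad\text{for a suitable constant } b,$$

with the aim of reducing the coverage error beyond $O(n^{-1})$; the precise constant $b$ and the conditions under which this correction applies are given in the paper and are not reproduced here.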