Asymptotic behavior of solutions: An application to stochastic NLP
Published in: | Mathematical Programming, 2022, Vol. 191 (1), pp. 281–306 |
Format: | Article |
Language: | English |
Online access: | Full text |
Abstract: | In this article we study the consistency of optimal and stationary (KKT) points of a stochastic non-linear optimization problem involving expectation functionals, when the underlying probability distribution associated with the random variable is weakly approximated by a sequence of random probability measures. The optimization model includes constraints involving expectation functionals that are not captured by a direct application of the previous results on optimality conditions existing in the literature. We first study the consistency of stationary points of a general NLP problem with convex and locally Lipschitz data, and then apply those results to the stochastic NLP problem and the stochastic minimax problem. Moreover, we derive an exponential bound for such approximations using a large deviation principle. |
ISSN: | 0025-5610 (print); 1436-4646 (online) |
DOI: | 10.1007/s10107-020-01554-6 |
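To make the approximation scheme described in the abstract concrete, the following is a minimal sketch of a sample average approximation (SAA), in which the expectations in both the objective and a constraint are replaced by empirical averages over n i.i.d. samples, one instance of weak approximation of the true distribution by a sequence of random probability measures. The quadratic objective, the linear constraint, the normal distribution of ξ, and the SciPy SLSQP solver are illustrative assumptions and are not taken from the paper.

```python
# Sketch of sample average approximation (SAA) for a stochastic NLP
# with expectation functionals in the objective and a constraint:
#
#   min_x  E[f(x, xi)]   subject to   E[g(x, xi)] <= 0,
#
# where the true distribution of xi is replaced by the empirical (random)
# measure of n i.i.d. samples.  Toy instance (illustrative only):
#   f(x, xi) = ||x - xi||^2,   g(x, xi) = xi . x - 1,   xi ~ N(0.5, I) in R^2.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def f_saa(x, xi):
    # Empirical average of f(x, xi) = ||x - xi||^2 over the sample
    return np.mean(np.sum((x[None, :] - xi) ** 2, axis=1))

def g_saa(x, xi):
    # Empirical average of g(x, xi) = xi . x - 1
    return np.mean(xi @ x) - 1.0

def solve_saa(n, dim=2):
    xi = rng.normal(loc=0.5, scale=1.0, size=(n, dim))  # i.i.d. samples of xi
    # SciPy's "ineq" constraints require fun(x) >= 0, so pass -g_saa
    cons = [{"type": "ineq", "fun": lambda x: -g_saa(x, xi)}]
    res = minimize(lambda x: f_saa(x, xi), x0=np.zeros(dim),
                   method="SLSQP", constraints=cons)
    return res.x, res.fun

# Consistency in the sense of the abstract: as n grows, the optimal (KKT)
# points of the approximate problems should converge to those of the true
# problem (here x* = (0.5, 0.5), which is feasible for the limit problem).
for n in (10, 100, 1_000, 10_000):
    x_n, val_n = solve_saa(n)
    print(f"n = {n:>6}:  x_n = {np.round(x_n, 3)},  value = {val_n:.3f}")
```

Re-running the loop with different seeds shows the spread of the SAA solutions shrinking as n grows, which is the qualitative behavior that the paper's exponential (large deviation) bound quantifies.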