Langevin Dynamics Based Algorithm e-THεO POULA for Stochastic Optimization Problems with Discontinuous Stochastic Gradient
Published in: Mathematics of Operations Research, 2024-09
Format: Article
Language: English
Online access: Full text
Abstract: We introduce a new Langevin dynamics based algorithm, called the extended tamed hybrid ε-order polygonal unadjusted Langevin algorithm (e-THεO POULA), to solve optimization problems with discontinuous stochastic gradients, which naturally appear in real-world applications such as quantile estimation, vector quantization, conditional value at risk (CVaR) minimization, and regularized optimization problems involving rectified linear unit (ReLU) neural networks. We demonstrate both theoretically and numerically the applicability of the e-THεO POULA algorithm. More precisely, under the conditions that the stochastic gradient is locally Lipschitz in average and satisfies a certain convexity at infinity condition, we establish nonasymptotic error bounds for e-THεO POULA in Wasserstein distances and provide a nonasymptotic estimate for the expected excess risk, which can be controlled to be arbitrarily small. Three key applications in finance and insurance are provided, namely, multiperiod portfolio optimization, transfer learning in multiperiod portfolio optimization, and insurance claim prediction, which involve neural networks with (Leaky)-ReLU activation functions. Numerical experiments conducted using real-world data sets illustrate the superior empirical performance of e-THεO POULA compared with SGLD (stochastic gradient Langevin dynamics), TUSLA (tamed unadjusted stochastic Langevin algorithm), adaptive moment estimation (Adam), and Adam with a strongly non-convex decaying learning rate in terms of model accuracy.
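The abstract describes a tamed, Langevin-type update with injected Gaussian noise applied to objectives whose stochastic gradients may be discontinuous. As a rough, non-authoritative illustration of that general idea only (not the authors' exact e-THεO POULA scheme, whose component-wise taming and boosting functions are defined in the paper), the Python sketch below shows a generic tamed stochastic gradient Langevin step; the taming formula, step size, and inverse temperature used here are placeholder assumptions.

```python
import numpy as np

# Illustrative sketch only: a generic tamed stochastic gradient Langevin update.
# The exact component-wise taming/boosting used by e-THεO POULA is defined in the
# paper; the taming formula, step size, and beta below are placeholder assumptions.

def tamed_langevin_step(theta, stoch_grad, lam=1e-3, beta=1e8, rng=None):
    """One step: theta <- theta - lam * H(theta) + sqrt(2 * lam / beta) * xi."""
    rng = np.random.default_rng() if rng is None else rng
    g = stoch_grad(theta)
    # Component-wise taming keeps each coordinate of the drift bounded even when
    # the stochastic gradient is very large or discontinuous (placeholder form).
    h = g / (1.0 + np.sqrt(lam) * np.abs(g))
    noise = np.sqrt(2.0 * lam / beta) * rng.standard_normal(theta.shape)
    return theta - lam * h + noise

# Toy usage: the non-smooth objective |t|, whose (sub)gradient sign(t) is
# discontinuous at 0, observed through additive gradient noise.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stoch_grad = lambda t: np.sign(t) + 0.1 * rng.standard_normal(t.shape)
    theta = np.array([2.0])
    for _ in range(5000):
        theta = tamed_langevin_step(theta, stoch_grad, rng=rng)
    print(theta)  # drifts toward 0, the minimizer of |t|
```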
Funding:
Financial support was provided by the Alan Turing Institute, London, under the Engineering and Physical Sciences Research Council [Grant EP/N510129/1]; the Ministry of Education of Singapore Academic Research Fund [Tier 2 Grant MOE-T2EP20222-0013]; the European Union’s Horizon 2020 Research and Innovation Programme [Marie Skłodowska-Curie Grant Agreement 801215]; the University of Edinburgh’s Data-Driven Innovation Programme, part of the Edinburgh and South East Scotland City Region Deal; an Institute of Information and Communications Technology Planning and Evaluation grant funded by the Korean Ministry of Science and ICT (MSIT) [Grant 2020-0-01336]; the Artificial Intelligence Graduate School Program of the Ulsan National Institute of Science and Technology; a National Research Foundation of Korea grant funded by the Korean government (MSIT) [Grant RS-2023-00253002]; and the Guangzhou–Hong Kong University of Sci
ISSN: 0364-765X, 1526-5471
DOI: 10.1287/moor.2022.0307