Adaptive Importance Sampling for Efficient Stochastic Root Finding and Quantile Estimation
Published in: Operations Research, 2024-11, Vol. 72 (6), pp. 2612-2630
Main authors: He, Jiang, Lam, Fu
Format: Article
Language: English
Online access: Full text
Summary: Stochastic root-finding problems are fundamental in the fields of operations research and data science. However, when the root-finding problem involves rare events, crude Monte Carlo can be prohibitively inefficient. Importance sampling (IS) is a commonly used approach, but selecting a good IS parameter requires knowledge of the problem’s solution, which creates a circular challenge. In “Adaptive Importance Sampling for Efficient Stochastic Root Finding and Quantile Estimation,” He, Jiang, Lam, and Fu propose an adaptive IS approach to untie this circularity. The adaptive IS simultaneously estimates the root and the IS parameters, and can be embedded in sample average approximation–type algorithms and stochastic approximation–type algorithms. They provide a theoretical analysis of strong consistency and asymptotic normality of the resulting estimators, and show the benefit of adaptivity from a worst-case perspective. They also provide specialized analyses of extreme quantile estimation under milder conditions.
In solving simulation-based stochastic root-finding or optimization problems that involve rare events, such as in extreme quantile estimation, running crude Monte Carlo can be prohibitively inefficient. To address this issue, importance sampling can be employed to drive down the sampling error to a desirable level. However, selecting a good importance sampler requires knowledge of the solution to the problem at hand, which is the goal to begin with and thus forms a circular challenge. We investigate the use of adaptive importance sampling to untie this circularity. Our procedure sequentially updates the importance sampler to reach the optimal sampler and the optimal solution simultaneously, and can be embedded in both sample-average-approximation-type algorithms and stochastic-approximation-type algorithms. Our theoretical analysis establishes strong consistency and asymptotic normality of the resulting estimators. We also demonstrate, via a minimax perspective, the key role of using adaptivity in controlling asymptotic errors. Finally, we illustrate the effectiveness of our approach via numerical experiments.
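To make the circularity and its adaptive resolution concrete, the sketch below estimates an extreme quantile of a standard normal distribution with importance sampling from a mean-shifted (exponentially tilted) normal proposal, alternating between re-estimating the quantile from IS-weighted samples and re-centering the proposal at the current estimate, in the spirit of the sample-average-approximation-type embedding described above. This is a minimal illustration under assumed choices (standard normal model, mean-shift proposal, ten fixed-size stages, and a hypothetical helper weighted_tail_quantile), not the authors' actual procedure, step-size rules, or guarantees.

```python
import numpy as np

def weighted_tail_quantile(x, w, p):
    # Illustrative helper: scan samples from largest to smallest and return the
    # value q at which the IS-weighted estimate of P(X > q), i.e.
    # (1/N) * sum_i w_i * 1{x_i > q}, first reaches the target tail probability p.
    order = np.argsort(x)[::-1]
    tail_mass = np.cumsum(w[order]) / len(x)
    k = np.searchsorted(tail_mass, p)
    return x[order][min(k, len(x) - 1)]

rng = np.random.default_rng(0)
p = 1e-4             # target: find q with P(X > q) = p for X ~ N(0, 1)
n_per_stage = 10_000
theta, q = 0.0, 0.0  # theta: mean of the IS proposal N(theta, 1); q: quantile estimate

for stage in range(10):
    x = rng.normal(loc=theta, size=n_per_stage)   # sample from the tilted proposal
    w = np.exp(-theta * x + 0.5 * theta**2)       # likelihood ratio N(0,1) / N(theta,1)
    q = weighted_tail_quantile(x, w, p)           # IS estimate of the (1 - p)-quantile
    theta = q                                     # adapt: re-center the sampler at the estimate

print(f"estimated quantile: {q:.3f}")             # true value is about 3.719 for p = 1e-4
```

The point of the adaptive step is that re-centering the proposal at the running estimate keeps the event {X > q} frequent under the proposal, so a moderate per-stage sample size can resolve a 10^-4 tail probability that crude Monte Carlo would rarely observe.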
Funding:
This work was supported by the National Natural Science Foundation of China [Grants 72293562, 72121001, and 72171060], the National Science Foundation [Grants CAREER CMMI-1834710 and IIS-1849280], and the Air Force Office of Scientific Research [Grant FA95502010211].
Supplemental Material:
The e-companion is available at https://doi.org/10.1287/opre.2023.2484.
ISSN: 0030-364X, 1526-5463
DOI: 10.1287/opre.2023.2484