Gaussian Markov Random Fields for Discrete Optimization via Simulation: Framework and Algorithms
Saved in:
Published in: | Operations Research, 2019-01, Vol. 67 (1), p. 250-266 |
---|---|
Main authors: | , , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
Summary: | This paper lays the foundation for employing Gaussian Markov random fields (GMRFs) for discrete decision-variable optimization via simulation; that is, optimizing the performance of a simulated system. Gaussian processes have gained popularity for inferential optimization, which iteratively updates a model of the simulated solutions and selects the next solution to simulate by relying on statistical inference from that model. We show that, for a discrete problem, GMRFs, a type of Gaussian process defined on a graph, provide better inference on the remaining optimality gap than the typical choice of continuous Gaussian process and thereby enable the algorithm to search efficiently and stop correctly when the remaining optimality gap is below a predefined threshold. We also introduce the concept of multiresolution GMRFs for large-scale problems, with which GMRFs of different resolutions interact to efficiently focus the search on promising regions of solutions. We consider optimizing the expected value of some performance measure of a dynamic stochastic simulation with a statistical guarantee for optimality when the decision variables are discrete, in particular, integer-ordered; the number of feasible solutions is large; and the model execution is too slow to simulate even a substantial fraction of them. Our goal is to create algorithms that stop searching when they can provide inference about the remaining optimality gap similar to the correct-selection guarantee of ranking and selection when it simulates all solutions. Further, our algorithm remains competitive with fixed-budget algorithms that search efficiently but do not provide such inference. To accomplish this, we learn and exploit spatial relationships among the decision variables and objective function values using a Gaussian Markov random field (GMRF). Gaussian random fields on continuous domains are already used in deterministic and stochastic optimization because they facilitate the computation of measures, such as expected improvement, that balance exploration and exploitation. We show that GMRFs are particularly well suited to the discrete decision-variable problem, from both a modeling and a computational perspective. Specifically, GMRFs permit the definition of a sensible neighborhood structure, and they are defined by their precision matrices, which can be constructed to be sparse. Using this framework, we create both single and multiresolution algorithms, prove the asymptotic convergence... |
---|---|
ISSN: | 0030-364X 1526-5463 |
DOI: | 10.1287/opre.2018.1778 |
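The abstract above highlights the computational ingredients of the approach: a GMRF defined on the graph of feasible integer solutions, a sparse precision matrix built from a neighborhood structure, conditional (posterior) inference at unsimulated solutions, and an expected-improvement-type measure that balances exploration and exploitation. The sketch below is a minimal illustration of those ingredients, not the authors' algorithm: it assumes a 1-D integer lattice, a CAR-style precision matrix Q = tau (D - rho A) with arbitrary values of tau and rho, noise-free sample means at a handful of simulated solutions, and plain expected improvement in place of the paper's criterion; simulation noise, parameter estimation, the stopping rule, and multiresolution GMRFs are all omitted.

```python
# Minimal GMRF sketch (illustrative assumptions throughout; not the paper's implementation).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve
from scipy.stats import norm

n = 101                      # feasible solutions x = 0, ..., 100 (assumed toy problem size)
tau, rho = 2.0, 0.95         # conditional precision and neighbor dependence (assumed values)

# Sparse adjacency of the lattice graph: solutions i and i+1 are neighbors.
A = sp.diags([np.ones(n - 1), np.ones(n - 1)], offsets=[-1, 1], format="csr")
D = sp.diags(np.asarray(A.sum(axis=1)).ravel())

# CAR-style precision matrix: strictly diagonally dominant for |rho| < 1, hence positive definite.
Q = (tau * (D - rho * A)).tocsc()

mu = np.full(n, 10.0)        # constant prior mean of the objective (assumed)

# Pretend a few solutions have been simulated, yielding (here, noise-free) sample means.
obs_idx = np.array([5, 30, 55, 80, 95])
obs_val = np.array([9.1, 7.4, 6.2, 8.0, 9.5])
mask = np.zeros(n, dtype=bool)
mask[obs_idx] = True
U, O = np.where(~mask)[0], obs_idx

# GMRF conditional distribution at unsimulated solutions:
#   y_U | y_O ~ N( mu_U - Q_UU^{-1} Q_UO (y_O - mu_O),  Q_UU^{-1} )
Q_UU = Q[np.ix_(U, U)]
Q_UO = Q[np.ix_(U, O)]
cond_mean = mu[U] - spsolve(Q_UU, Q_UO @ (obs_val - mu[O]))
# Marginal conditional variances = diagonal of Q_UU^{-1} (dense inverse is fine at toy scale).
cond_var = np.diag(np.linalg.inv(Q_UU.toarray()))

# Plain expected improvement for minimization (the paper uses its own criterion instead).
y_best = obs_val.min()
s = np.sqrt(cond_var)
z = (y_best - cond_mean) / s
ei = (y_best - cond_mean) * norm.cdf(z) + s * norm.pdf(z)

next_solution = U[np.argmax(ei)]
print("next solution to simulate:", next_solution, "EI:", ei.max())
```

Because Q is sparse, the conditional mean requires only a sparse linear solve on the unsimulated block, which reflects the computational advantage the abstract attributes to GMRFs over continuous-domain Gaussian processes with dense covariance matrices.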