Parallel Adaptive Survivor Selection



Bibliographic Details
Published in: Operations Research, 2024-01, Vol. 72 (1), pp. 336-354
First author: Pei, Linda
Format: Article
Language: English
Online access: Full text
Description
Abstract: Ranking and selection (R&S) procedures in simulation optimization simulate every feasible solution to provide global statistical error control, often selecting a single solution in finite time that is optimal or near-optimal with high probability. By exploiting parallel computing advancements, large-scale problems with hundreds of thousands and even millions of feasible solutions become suitable for R&S. Naively parallelizing existing R&S methods originally designed for a serial computing setting is generally ineffective, however, as many of these conventional methods uphold family-wise error guarantees that suffer from multiplicity and require pairwise comparisons that present a computational bottleneck. Parallel adaptive survivor selection (PASS) is a new framework specifically designed for large-scale parallel R&S. By comparing systems to an adaptive "standard" that is learned as the algorithm progresses, PASS eliminates inferior solutions with false elimination rate control and with computationally efficient aggregate comparisons rather than pairwise comparisons. PASS satisfies desirable theoretical properties and performs effectively on realistic problems.

We reconsider the ranking and selection (R&S) problem in stochastic simulation optimization in light of high-performance, parallel computing, where we take "R&S" to mean any procedure that simulates all systems (feasible solutions) to provide some statistical guarantee on the selected systems. We argue that when the number of systems is very large, and the parallel processing capability is also substantial, neither the standard statistical guarantees such as probability of correct selection nor the usual observation-saving methods such as elimination via paired comparisons or complex budget allocation serve the experimenter well. As an alternative, we propose a guarantee on the expected false elimination rate that avoids the curse of multiplicity, together with a method to achieve it that is designed to scale computationally with problem size and parallel computing capacity. To facilitate this approach, we present a new mathematical representation, prove small-sample and asymptotic properties, evaluate variations of the method, and demonstrate a specific implementation on a problem with over 1,100,000 systems using only 21 parallel processors. Although we focus on inference about the best system here, our parallel adaptive survivor selection framework can be generalized to many other useful definitions.
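To make the elimination idea in the abstract concrete, the following is a minimal Python sketch of comparing each surviving system's running sample mean against a single adaptive "standard" and eliminating systems that fall too far below it in one aggregate pass. This is only an illustration under assumed simplifications, not the authors' PASS procedure: the toy simulation model, the choice of the running best sample mean as the standard, and the 4/sqrt(n) slack rule are hypothetical choices made for the sketch.

```python
# Illustrative adaptive-standard elimination sketch (NOT the PASS procedure from
# the paper): toy model, standard, and slack rule are all assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem: k systems with unknown true means; larger is better.
k = 1_000
true_means = rng.normal(0.0, 1.0, size=k)

def simulate(system_ids, n_reps):
    """Draw n_reps noisy observations for each surviving system (toy model)."""
    return true_means[system_ids][:, None] + rng.normal(0.0, 1.0, size=(len(system_ids), n_reps))

survivors = np.arange(k)
sums = np.zeros(k)
counts = np.zeros(k)

for stage in range(20):
    obs = simulate(survivors, n_reps=10)
    sums[survivors] += obs.sum(axis=1)
    counts[survivors] += obs.shape[1]

    means = sums[survivors] / counts[survivors]
    standard = means.max()                    # adaptive "standard" learned so far
    slack = 4.0 / np.sqrt(counts[survivors])  # heuristic tolerance; shrinks with data

    # Aggregate comparison against the single standard (no pairwise comparisons):
    survivors = survivors[means >= standard - slack]

print(f"{len(survivors)} systems survive; best true mean among them: "
      f"{true_means[survivors].max():.3f} (overall best: {true_means.max():.3f})")
```

Each stage performs one comparison per surviving system against a single standard rather than pairwise comparisons among all systems, which is the scalability point the abstract emphasizes; an actual implementation would choose the standard and the elimination threshold so as to control the expected false elimination rate.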
ISSN: 0030-364X, 1526-5463
DOI: 10.1287/opre.2022.2343