Learning a Large Neighborhood Search Algorithm for Mixed Integer Programs
Format: Article
Language: English
Abstract: Large Neighborhood Search (LNS) is a combinatorial optimization heuristic
that starts with an assignment of values for the variables to be optimized, and
iteratively improves it by searching a large neighborhood around the current
assignment. In this paper we consider a learning-based LNS approach for mixed
integer programs (MIPs). We train a Neural Diving model to represent a
probability distribution over assignments, which, together with an
off-the-shelf MIP solver, generates an initial assignment. Formulating the
subsequent search steps as a Markov Decision Process, we train a Neural
Neighborhood Selection policy to select a search neighborhood at each step,
which is searched using a MIP solver to find the next assignment. The policy
network is trained using imitation learning. We propose a target policy for
imitation that, given enough compute resources, is guaranteed to select the
neighborhood containing the optimal next assignment amongst all possible
choices for the neighborhood of a specified size. Our approach matches or
outperforms all the baselines on five real-world MIP datasets with large-scale
instances from diverse applications, including two production applications at
Google. It achieves $2\times$ to $37.8\times$ better average primal gap than
the best baseline on three of the datasets at large running times.
DOI: 10.48550/arxiv.2107.10201
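
The abstract describes a pipeline: an initial assignment produced by a Neural Diving model together with an off-the-shelf MIP solver, followed by repeated steps in which a neighborhood of variables is selected and re-optimized by a MIP solver. The sketch below is a minimal, non-authoritative rendering of that loop in Python, not the paper's implementation: `select_neighborhood`, `solve_sub_mip`, `objective`, and `random_neighborhood` are hypothetical stand-ins (the learned Neural Neighborhood Selection policy and a real MIP solver would fill those roles), and `primal_gap` assumes the standard definition from the MIP literature, which the paper may vary.

```python
import random
from typing import Callable, List, Sequence, Tuple


def primal_gap(obj_value: float, best_known: float) -> float:
    """Primal gap between a feasible objective value and the best known value.

    Assumes the common MIP-literature definition: 0 if both values are 0,
    1 if their signs differ, otherwise the normalized absolute difference.
    """
    if obj_value == 0.0 and best_known == 0.0:
        return 0.0
    if obj_value * best_known < 0.0:
        return 1.0
    return abs(best_known - obj_value) / max(abs(best_known), abs(obj_value))


def lns_for_mip(
    initial_assignment: Sequence[float],
    select_neighborhood: Callable[[Sequence[float]], List[int]],
    solve_sub_mip: Callable[[Sequence[float], List[int]], List[float]],
    objective: Callable[[Sequence[float]], float],
    num_steps: int = 50,
) -> Tuple[List[float], float]:
    """Generic LNS loop for a minimization MIP.

    At each step a subset of variables (the neighborhood) is chosen; those
    variables are unfixed and re-optimized by a MIP solver while all other
    variables stay fixed at their current values.
    """
    current = list(initial_assignment)
    best, best_obj = list(current), objective(current)
    for _ in range(num_steps):
        # In the paper this choice is made by the learned Neural Neighborhood
        # Selection policy; here it is an injected callable.
        unfixed = select_neighborhood(current)
        candidate = solve_sub_mip(current, unfixed)
        cand_obj = objective(candidate)
        if cand_obj < best_obj:
            best, best_obj = list(candidate), cand_obj
        current = list(candidate)
    return best, best_obj


def random_neighborhood(assignment: Sequence[float], size: int = 10) -> List[int]:
    """Random fixed-size neighborhood, a simple stand-in for a learned policy."""
    indices = list(range(len(assignment)))
    random.shuffle(indices)
    return indices[:size]
```

The loop itself is solver-agnostic; the learning in the paper lives entirely in how the initial assignment is produced and how `select_neighborhood` is chosen at each step.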