Optimization by Parallel Quasi-Quantum Annealing with Gradient-Based Sampling
Format: Article
Language: English
Abstract: Learning-based methods have gained attention as general-purpose solvers due to their ability to automatically learn problem-specific heuristics, reducing the need for manually crafted heuristics. However, these methods often face scalability challenges. To address these issues, the improved Sampling algorithm for Combinatorial Optimization (iSCO), using discrete Langevin dynamics, has been proposed, demonstrating better performance than several learning-based solvers. This study proposes a different approach that integrates gradient-based updates through continuous relaxation, combined with Quasi-Quantum Annealing (QQA). QQA smoothly transitions the objective function from a simple convex function, minimized at half-integral values, to the original objective function, in which the relaxed variables attain their minima only at discrete points. Furthermore, we incorporate communication across parallel runs, leveraging GPUs to enhance exploration and accelerate convergence. Numerical experiments demonstrate that our method is a competitive general-purpose solver, achieving performance comparable to iSCO and learning-based solvers across various benchmark problems. Notably, our method exhibits superior speed-quality trade-offs on large-scale instances compared to iSCO, learning-based solvers, commercial solvers, and specialized algorithms.
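The annealing scheme described in the abstract can be illustrated with a minimal sketch. This is not the paper's exact formulation: the toy QUBO instance, the quadratic penalty term, the learning rate, and the annealing schedule are all illustrative assumptions. The key idea shown is that a penalty coefficient is swept from negative (a convex regularizer whose minimum sits at the half-integral point 0.5) to positive (pushing the relaxed variables toward the discrete set {0, 1}), while plain projected gradient descent updates the relaxation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy QUBO: minimize x^T Q x over x in {0,1}^n (hypothetical random instance).
n = 20
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2  # symmetrize

def qubo(p):
    return float(p @ Q @ p)

def qubo_grad(p):
    return 2.0 * Q @ p

def penalty_grad(p, lam):
    # Gradient of lam * sum(1 - (2p - 1)^2): for lam < 0 this term is
    # minimized at p = 0.5 (smoothing phase); for lam > 0 it penalizes
    # half-integral values, driving p toward {0, 1} (discretizing phase).
    return -4.0 * lam * (2.0 * p - 1.0)

p = np.full(n, 0.5)  # relaxed variables, initialized at the half-integral point
lr = 0.01
for lam in np.linspace(-2.0, 2.0, 500):  # anneal: smoothing -> discretizing
    p -= lr * (qubo_grad(p) + penalty_grad(p, lam))
    p = np.clip(p, 0.0, 1.0)  # projected step keeps p in the unit box

x = (p > 0.5).astype(int)  # round the (by now nearly discrete) solution
print("objective:", qubo(x))
```

In the paper's setting many such runs would execute in parallel on a GPU with communication between them; the sketch above follows a single trajectory only.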
DOI: 10.48550/arxiv.2409.02135