Biased thermodynamics can explain the behaviour of smart optimization algorithms that work above the dynamical threshold
Published in: | arXiv.org 2023-03 |
---|---|
Main authors: | , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
Abstract: | Random constraint satisfaction problems can display a very rich structure in the space of solutions, often with an ergodicity-breaking -- also known as clustering or dynamical -- transition preceding the satisfiability threshold as the constraint-to-variable ratio \(\alpha\) is increased. However, smart algorithms start failing to find solutions in polynomial time at some threshold \(\alpha_{\rm alg}\) which is algorithm-dependent and generally larger than the dynamical one \(\alpha_d\). This discrepancy arises because \(\alpha_d\) is traditionally computed with respect to the uniform measure over all the solutions. Thus, while it bounds the region where uniform sampling of the solutions is easy, it cannot predict the performance of off-equilibrium processes, which are still able to find atypical solutions even beyond \(\alpha_d\). Here we show that a reconciliation between algorithmic behaviour and thermodynamic prediction is nonetheless possible at least up to a threshold \(\alpha_d^{\rm opt}\geq\alpha_d\), defined as the maximum value of the dynamical threshold over all possible probability measures on the solutions. We consider a simple Monte Carlo-based optimization algorithm, restricted to the solution space, and we demonstrate that sampling the equilibrium distribution of a biased measure that improves on \(\alpha_d\) is still possible even beyond the ergodicity-breaking point of the uniform measure, where other algorithms hopelessly enter the out-of-equilibrium regime. The conjecture we put forward is that many smart algorithms sample the solution space according to a biased measure: once this measure is identified, the algorithmic threshold is given by the corresponding ergodicity-breaking transition. |
ISSN: | 2331-8422 |
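
The abstract describes a Monte Carlo dynamics restricted to the solution space and targeting a biased, non-uniform measure over solutions, with \(\alpha_d^{\rm opt}\) defined as the largest dynamical threshold over all probability measures on the solution set. Below is a minimal, illustrative sketch of such a dynamics on a toy random 3-SAT instance: single-variable Metropolis moves that never leave the solution space, sampling a biased measure \(P(\underline{x})\propto e^{-\mu\, n_{\rm frozen}(\underline{x})}\). The specific bias (penalizing locally frozen variables), the parameter \(\mu\), and all function names are assumptions made for illustration, not the particular measure or algorithm studied in the paper.

```python
import itertools
import math
import random

def random_3sat(n_vars, n_clauses, rng):
    """Each clause is a tuple of 3 signed literals, e.g. (+2, -5, +7)."""
    clauses = []
    for _ in range(n_clauses):
        chosen = rng.sample(range(1, n_vars + 1), 3)
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in chosen))
    return clauses

def satisfies(x, clauses):
    """True if the boolean assignment x (0-indexed) satisfies every clause."""
    return all(any((lit > 0) == x[abs(lit) - 1] for lit in clause)
               for clause in clauses)

def n_frozen(x, clauses, n_vars):
    """Illustrative bias: count variables whose single flip leaves the solution space."""
    frozen = 0
    for i in range(n_vars):
        y = x.copy()
        y[i] = not y[i]
        if not satisfies(y, clauses):
            frozen += 1
    return frozen

def biased_solution_mcmc(clauses, n_vars, mu=1.0, steps=10_000, seed=0):
    """Metropolis sampling of P(x) ∝ exp(-mu * n_frozen(x)) over satisfying assignments."""
    rng = random.Random(seed)
    # Brute-force a starting solution (only feasible for tiny toy instances).
    for bits in itertools.product([False, True], repeat=n_vars):
        x = list(bits)
        if satisfies(x, clauses):
            break
    else:
        raise ValueError("instance is unsatisfiable")
    energy = n_frozen(x, clauses, n_vars)
    for _ in range(steps):
        i = rng.randrange(n_vars)
        y = x.copy()
        y[i] = not y[i]
        if not satisfies(y, clauses):
            continue  # moves leaving the solution space are rejected outright
        new_energy = n_frozen(y, clauses, n_vars)
        # Metropolis acceptance for the biased measure exp(-mu * n_frozen)
        if rng.random() < math.exp(-mu * (new_energy - energy)):
            x, energy = y, new_energy
    return x

if __name__ == "__main__":
    rng = random.Random(1)
    clauses = random_3sat(n_vars=12, n_clauses=30, rng=rng)  # alpha = 2.5
    sample = biased_solution_mcmc(clauses, n_vars=12, mu=2.0)
    print("biased sample of the solution space:", sample)
```

In this sketch the bias strength \(\mu\) plays the role of the tilting parameter of the measure; \(\mu = 0\) recovers uniform sampling of solutions, while \(\mu > 0\) favours solutions with fewer frozen variables. Which bias actually maximizes the dynamical threshold is exactly the question addressed by the paper, and this toy choice is not claimed to be optimal.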