Evaluating and Tuning n-fold Integer Programming
Published in: The ACM Journal of Experimental Algorithmics, 2019-12, Vol. 24, pp. 1-22
Main authors: Kateřina Altmanová, Dušan Knop, Martin Koutecký
Format: Article
Language: English
Online access: Full text
Abstract: In recent years, algorithmic breakthroughs in stringology, computational social choice, scheduling, and so on were achieved by applying the theory of so-called $n$-fold integer programming. An $n$-fold integer program (IP) has a highly uniform block-structured constraint matrix. Hemmecke, Onn, and Romanchuk [Math. Program., 2013] showed an algorithm with runtime $\Delta^{O(rst + r^2 s)} \cdot n^3$, where $\Delta$ is the largest coefficient, $r$, $s$, and $t$ are the dimensions of the blocks of the constraint matrix, and $n$ is the total dimension of the IP; thus, the algorithm is efficient if the blocks are of small size and have small coefficients. The algorithm works by iteratively improving a feasible solution with augmenting steps, and $n$-fold IPs have the special property that augmenting steps are guaranteed to exist in a not-too-large neighborhood. However, this algorithm has never been implemented and evaluated.
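To make the augmentation framework concrete, here is a minimal brute-force sketch in Python, assuming an IP of the form min $c^T x$ subject to $Ax = b$ and box bounds $l \le x \le u$. It is not the paper's algorithm: the real method exploits the $n$-fold block structure to search its neighborhood efficiently, while this toy enumerates all steps with entries in $\{-1, 0, 1\}$ and only illustrates the iterate-and-improve loop.

```python
# A toy augmentation loop: NOT the paper's algorithm, only the framework.
# The real method searches a structured neighborhood derived from the
# n-fold block structure; this sketch brute-forces steps in {-1,0,1}^n.
import itertools
import numpy as np

def find_augmenting_step(A, c, lo, hi, x):
    """Return a best improving step g with A @ g = 0 and lo <= x+g <= hi
    among all g in {-1,0,1}^n, or None if no such step improves c @ x."""
    best, best_gain = None, 0
    for step in itertools.product((-1, 0, 1), repeat=len(x)):
        g = np.array(step)
        if np.any(A @ g != 0):                 # must preserve A x = b
            continue
        y = x + g
        if np.any(y < lo) or np.any(y > hi):   # must respect box bounds
            continue
        gain = c @ x - c @ y                   # positive gain = objective drops
        if gain > best_gain:
            best, best_gain = g, gain
    return best

def augment(A, c, lo, hi, x):
    """Iteratively apply augmenting steps until none improves x."""
    while (g := find_augmenting_step(A, c, lo, hi, x)) is not None:
        x = x + g
    return x
```

Note that genuine augmenting (Graver-basis) steps of an $n$-fold IP can have entries larger than 1, bounded in terms of $\Delta$, $r$, $s$, and $t$, so the $\{-1, 0, 1\}$ search above is purely illustrative.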
We have implemented the algorithm and learned the following along the way. The original algorithm is practically unusable, but we discover a series of improvements that make its evaluation possible. Crucially, we observe that a certain constant in the algorithm can be treated as a tuning parameter, which yields an efficient heuristic (essentially searching in a smaller-than-guaranteed neighborhood). Furthermore, the algorithm uses an overly expensive strategy to find a “best” step, while finding only an “approximately best” step is much cheaper, yet sufficient for quick convergence. Using this insight, we improve the asymptotic dependence on $n$ from $n^3$ to $n^2 \log n$.
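One way to read the “approximately best step” idea is the following sketch, an assumption-laden reading rather than the authors' exact construction: given a hypothetical oracle `step_for_length(lam)` that returns a most-improving augmenting step for one fixed step length `lam`, scanning only geometrically spaced lengths loses at most a constant factor of the achievable gain, replacing a linear scan over step lengths with a logarithmic one, which is consistent with the reported drop from $n^3$ to $n^2 \log n$.

```python
# Sketch of an "approximately best" step search. `step_for_length` and
# `gain` are assumed interfaces, not the authors' code: step_for_length(lam)
# returns a most-improving augmenting step of step length lam (or None),
# and gain(g) is the objective improvement from applying g.

def approx_best_step(step_for_length, gain, lam_max):
    best_g, best_gain = None, 0
    lam = 1
    while lam <= lam_max:                  # O(log lam_max) oracle calls
        g = step_for_length(lam)
        if g is not None and gain(g) > best_gain:
            best_g, best_gain = g, gain(g)
        lam *= 2                           # geometric step-length scan
    return best_g
```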
Finally, we tested the behavior of the algorithm with various values of the tuning parameter and different strategies for finding improving steps. First, we show that decreasing the tuning parameter initially leads to an increased number of iterations needed for convergence and eventually to getting stuck in local optima, as expected. However, surprisingly small values of the parameter already exhibit good behavior while significantly lowering the time the algorithm spends per single iteration. Second, our new strategy for finding “approximately best” steps wildly outperforms the original construction.
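The kind of experiment reported above could be driven by a small harness like the sketch below; `solve`, `neighborhood_bound`, and the result fields are hypothetical names, not the authors' interface.

```python
# Hypothetical experiment harness: sweep the tuning parameter (the bound
# on the searched neighborhood) and record convergence behavior.

def sweep_tuning_parameter(instance, solve, bounds):
    """Run the solver once per neighborhood bound and report how the
    iteration count and final objective respond to the parameter."""
    for bound in bounds:
        result = solve(instance, neighborhood_bound=bound)
        print(f"bound={bound:4d}  iterations={result.iterations:6d}  "
              f"objective={result.objective}")
```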
ISSN: 1084-6654
DOI: 10.1145/3330137