Analyzing and Improving Greedy 2-Coordinate Updates for Equality-Constrained Optimization via Steepest Descent in the 1-Norm
Main authors:
Format: Article
Language: English
Subject terms:
Online access: Order full text
Summary: We consider minimizing a smooth function subject to a summation constraint over its variables. By exploiting a connection between the greedy 2-coordinate update for this problem and equality-constrained steepest descent in the 1-norm, we give a convergence rate for greedy selection under a proximal Polyak-Łojasiewicz assumption that is faster than random selection and independent of the problem dimension $n$. We then consider minimizing with both a summation constraint and bound constraints, as arises in the support vector machine dual problem. Existing greedy rules for this setting either guarantee only trivial progress or require $O(n^2)$ time to compute. We show that bound- and summation-constrained steepest descent in the 1-norm guarantees more progress per iteration than previous rules and can be computed in only $O(n \log n)$ time.
DOI: 10.48550/arxiv.2307.01169
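To make the summary concrete, below is a minimal sketch of one possible greedy 2-coordinate step of the kind it describes: pick the coordinates with the largest and smallest partial derivatives and move them by equal and opposite amounts, so the summation constraint is preserved. The 1/(2L) step size, the function names, and the quadratic test objective are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def greedy_two_coordinate_step(x, grad, L):
    """One greedy 2-coordinate update that preserves sum(x).

    Chooses the coordinates with the largest and smallest partial
    derivatives (the greedy pair for equality-constrained steepest
    descent in the 1-norm) and moves them by equal and opposite
    amounts, so the summation constraint is maintained.
    """
    i = int(np.argmax(grad))                 # largest partial derivative
    j = int(np.argmin(grad))                 # smallest partial derivative
    step = (grad[i] - grad[j]) / (2.0 * L)   # 1/(2L) step along e_j - e_i
    x_new = x.copy()
    x_new[i] -= step                         # decrease where the gradient is largest
    x_new[j] += step                         # increase where the gradient is smallest
    return x_new                             # sum(x_new) == sum(x)

# Hypothetical usage: minimize 0.5 * ||A x - b||^2 subject to sum(x) = 1.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
L = np.linalg.norm(A.T @ A, 2)               # a conservative Lipschitz constant
x = np.full(5, 1.0 / 5)                      # feasible start: coordinates sum to 1
for _ in range(200):
    x = greedy_two_coordinate_step(x, A.T @ (A @ x - b), L)
print(round(x.sum(), 6))                     # constraint preserved: prints 1.0
```

At a constrained stationary point all partial derivatives are equal, so the computed step is zero and the iterate stops moving; away from stationarity the two selected coordinates give the steepest feasible 2-coordinate direction.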