Extensions to the Proximal Distance Method of Constrained Optimization

Bibliographic Details
Published in: Journal of Machine Learning Research, 2022-01, Vol. 23
Main authors: Landeros, Alfonso; Padilla, Oscar Hernan Madrid; Zhou, Hua; Lange, Kenneth
Format: Article
Language: English
Online access: Full text
Summary: The current paper studies the problem of minimizing a loss f(x) subject to constraints of the form Dx ∈ S, where S is a closed set, convex or not, and D is a matrix that fuses parameters. Fusion constraints can capture smoothness, sparsity, or more general constraint patterns. To tackle this generic class of problems, we combine the Beltrami-Courant penalty method of optimization with the proximal distance principle. The latter is driven by minimization of penalized objectives f(x) + (ρ/2) dist(Dx, S)² involving large tuning constants ρ and the squared Euclidean distance of Dx from S. The next iterate x_{n+1} of the corresponding proximal distance algorithm is constructed from the current iterate x_n by minimizing the majorizing surrogate function f(x) + (ρ/2)‖Dx − P_S(Dx_n)‖². For fixed ρ, a subanalytic loss f(x), and a subanalytic constraint set S, we prove convergence to a stationary point. Under stronger assumptions, we provide convergence rates and demonstrate linear local convergence. We also construct a steepest descent (SD) variant to avoid costly linear system solves. To benchmark our algorithms, we compare their results to those delivered by the alternating direction method of multipliers (ADMM). Our extensive numerical tests include problems on metric projection, convex regression, convex clustering, total variation image denoising, and projection of a matrix to a good condition number. These experiments demonstrate the superior speed and acceptable accuracy of our steepest descent variant on high-dimensional problems. Julia code to replicate all of our experiments can be found at https://github.com/alanderos91/ProximalDistanceAlgorithms.jl.
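
The Julia fragment below is a minimal sketch of the proximal distance iteration described in the summary, assuming a least-squares loss f(x) = ½‖Ax − b‖² and a set S with a cheap projection operator; the names proximal_distance and project_S are illustrative, not taken from the authors' ProximalDistanceAlgorithms.jl package.

    # Sketch of proximal distance iterations at a fixed penalty ρ, assuming
    # f(x) = ½‖Ax − b‖², so minimizing the majorizing surrogate amounts to
    # solving the linear system (AᵀA + ρDᵀD) x = Aᵀb + ρDᵀP_S(Dxₙ).
    using LinearAlgebra

    function proximal_distance(A, b, D, project_S, ρ; iters = 100)
        x = zeros(size(A, 2))
        # Factor the surrogate's normal-equation matrix once and reuse it.
        F = cholesky(Symmetric(Matrix(A'A + ρ * D'D)))
        for _ in 1:iters
            p = project_S(D * x)      # project the fused parameters Dxₙ onto S
            x = F \ (A'b + ρ * D'p)   # minimize the majorizing surrogate
        end
        return x
    end

    # Example: D = I and S the nonnegative orthant, so P_S clips at zero.
    A, b = randn(50, 10), randn(50)
    x = proximal_distance(A, b, I(10), z -> max.(z, 0), 1e3)

As the summary notes, the steepest descent (SD) variant exists precisely to avoid the exact solve (F \ …) above, instead taking descent steps on the same surrogate, which matters in high dimensions where factoring AᵀA + ρDᵀD is too expensive.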
ISSN: 1532-4435 (print); 1533-7928 (online)