Nonconvex Optimization via MM Algorithms: Convergence Theory

Bibliographic Details
Published in: arXiv.org 2021-06
Main authors: Lange, Kenneth; Won, Joong-Ho; Landeros, Alfonso; Zhou, Hua
Format: Article
Language: English
Online access: Full text
Description
Abstract: The majorization-minimization (MM) principle is an extremely general framework for deriving optimization algorithms. It includes the expectation-maximization (EM) algorithm, proximal gradient algorithm, concave-convex procedure, quadratic lower bound algorithm, and proximal distance algorithm as special cases. Besides numerous applications in statistics, optimization, and imaging, the MM principle finds wide application in large-scale machine learning problems such as matrix completion, discriminant analysis, and nonnegative matrix factorization. When applied to nonconvex optimization problems, MM algorithms enjoy the advantages of convexifying the objective function, separating variables, numerical stability, and ease of implementation. However, compared to the large body of literature on other optimization algorithms, the convergence analysis of MM algorithms is scattered and problem-specific. This survey presents a unified treatment of the convergence of MM algorithms. With modern applications in mind, the results encompass non-smooth objective functions and non-asymptotic analysis.
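The descent mechanism behind every algorithm in this list is the same: majorize the objective by a surrogate that touches it at the current iterate, then minimize the surrogate. As a minimal sketch (not taken from the survey; the function mm_minimize and the least-squares test problem are illustrative assumptions), the Python code below uses the standard quadratic majorizer for a smooth objective with an L-Lipschitz gradient, in which case the MM update reduces to a gradient step of size 1/L, i.e. the smooth special case of the proximal gradient algorithm named in the abstract.

import numpy as np

def mm_minimize(f, grad_f, L, x0, max_iter=500, tol=1e-10):
    """Minimize f by MM with the quadratic majorizer
    g(x | x_n) = f(x_n) + <grad_f(x_n), x - x_n> + (L/2) * ||x - x_n||^2,
    which lies above f whenever grad_f is L-Lipschitz and touches f at x_n.
    The surrogate's exact minimizer is x_n - grad_f(x_n) / L, so the MM
    update is a gradient step and f(x_{n+1}) <= g(x_{n+1} | x_n) <= f(x_n)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = x - grad_f(x) / L          # argmin of the surrogate g(. | x)
        assert f(x_new) <= f(x) + 1e-12    # hallmark MM descent property
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Illustrative test problem: f(x) = ||Ax - b||^2 / 2 has gradient
# A^T (Ax - b), which is L-Lipschitz with L = lambda_max(A^T A).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
L = np.linalg.eigvalsh(A.T @ A).max()
x_hat = mm_minimize(lambda x: 0.5 * np.sum((A @ x - b) ** 2),
                    lambda x: A.T @ (A @ x - b), L, np.zeros(5))

The descent property enforced by the assert holds for any valid majorizer regardless of convexity of f, which is why MM iterations remain stable on the nonconvex problems the survey targets.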
ISSN: 2331-8422
DOI: 10.48550/arxiv.2106.02805