On the Convergence of Block Majorization-Minimization Algorithms on the Grassmann Manifold


Bibliographic Details
Published in: IEEE Signal Processing Letters, 2024, Vol. 31, pp. 1314-1318
Main authors: Lopez, Carlos Alejandro; Riba, Jaume
Format: Article
Language: English
Description
Abstract: The Majorization-Minimization (MM) framework is widely used to derive efficient algorithms for problems that require the optimization of a cost function (convex or not). It is based on the sequential optimization of a surrogate function over closed convex sets. A natural extension of this framework incorporates ideas from Block Coordinate Descent (BCD) algorithms into the MM framework and is known as block MM. The rationale behind the block extension is to partition the optimization variables into several independent blocks, to obtain a surrogate for each block, and to optimize the surrogate of each block cyclically. However, known convergence proofs of the block MM are valid only under the assumption that the constraint sets are closed and convex. Hence, classical proofs do not ensure the global convergence of the block MM on non-convex sets, a guarantee that is needed in the iterative schemes that naturally emerge in a wide range of subspace-based signal processing applications. For this purpose, the aim of this letter is to review the convergence proof of the block MM and to extend it to blocks constrained to the Grassmann manifold.
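
To make the block structure described in the abstract concrete, the following minimal Python/NumPy sketch shows a block scheme of the kind the letter studies. It is not the letter's algorithm: it is an illustrative special case in which each surrogate is the cost itself (a tight surrogate, so each block step is an exact block minimization), applied to a hypothetical subspace-fitting problem. One block is unconstrained and the other is an orthonormal basis representing a point on the Grassmann manifold, updated via an orthogonal Procrustes (SVD) step; the function name and problem instance are my own.

    import numpy as np

    def block_mm_subspace_fit(X, k, n_iters=50, seed=0):
        """Illustrative block MM for  min_{U, W} ||X - U W||_F^2,
        with U on the Stiefel manifold (an orthonormal representative of
        a point on the Grassmann manifold Gr(d, k)) and W unconstrained.
        Each block update minimizes a tight surrogate (here, the cost
        itself), so the objective is monotonically non-increasing."""
        d, n = X.shape
        rng = np.random.default_rng(seed)
        # Random orthonormal initialization of the subspace basis.
        U, _ = np.linalg.qr(rng.standard_normal((d, k)))
        costs = []
        for _ in range(n_iters):
            # Block 1 (unconstrained): W minimizes ||X - U W||_F^2 exactly.
            W = U.T @ X
            # Block 2 (Grassmann-constrained): orthogonal Procrustes step.
            # min_{U^T U = I} ||X - U W||_F^2 is solved by the SVD of X W^T.
            P, _, Qt = np.linalg.svd(X @ W.T, full_matrices=False)
            U = P @ Qt
            costs.append(np.linalg.norm(X - U @ (U.T @ X), "fro") ** 2)
        return U, costs

    # Example usage on synthetic data:
    # X = np.random.default_rng(1).standard_normal((20, 100))
    # U, costs = block_mm_subspace_fit(X, k=3)  # costs is non-increasing

Because every block update exactly minimizes the cost over its block, the recorded cost sequence is non-increasing even though the constraint set for U is non-convex; it is precisely this monotonicity, on a non-convex (Grassmann) constraint set, that the letter's extended convergence analysis addresses.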
ISSN: 1070-9908, 1558-2361
DOI: 10.1109/LSP.2024.3396660