Implementation of an optimal first-order method for strongly convex total variation regularization


Bibliographic Details
Published in: BIT (Nordisk Tidskrift for Informationsbehandling), 2012-06, Vol. 52 (2), pp. 329-356
Main authors: Jensen, T. L., Jørgensen, J. H., Hansen, P. C., Jensen, S. H.
Format: Article
Language: English
Description
Abstract: We present a practical implementation of an optimal first-order method, due to Nesterov, for large-scale total variation regularization in tomographic reconstruction, image deblurring, and related problems. The algorithm applies to μ-strongly convex objective functions with L-Lipschitz continuous gradient. In Nesterov's framework both μ and L are assumed known, an assumption that is seldom satisfied in practice. We propose to incorporate mechanisms that estimate locally sufficient values of μ and L during the iterations. These mechanisms also allow the method to be applied to non-strongly convex functions. We discuss the convergence rate and iteration complexity of several first-order methods, including the proposed algorithm, and we use a 3D tomography problem to compare their performance. In numerical simulations we demonstrate the advantage, in terms of faster convergence, of estimating the strong convexity parameter μ when solving ill-conditioned problems to high accuracy, compared with an optimal method for non-strongly convex problems and a first-order method with Barzilai-Borwein step size selection.
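For context, the abstract refers to Nesterov's optimal first-order method for μ-strongly convex objectives with L-Lipschitz continuous gradient. Below is a minimal Python sketch of the constant-momentum variant of that scheme for the case where μ and L are known exactly, applied to a hypothetical strongly convex quadratic test problem. It is not the paper's algorithm, which additionally estimates μ and L during the iterations; the function name and test problem are illustrative assumptions.

import numpy as np

def nesterov_strongly_convex(grad, x0, mu, L, iters=500):
    # Nesterov's optimal method for a mu-strongly convex function with
    # L-Lipschitz gradient, using the constant momentum coefficient
    # (sqrt(L) - sqrt(mu)) / (sqrt(L) + sqrt(mu)).
    q = np.sqrt(mu / L)
    beta = (1 - q) / (1 + q)           # momentum coefficient
    x, y = x0.copy(), x0.copy()
    for _ in range(iters):
        x_new = y - grad(y) / L        # gradient step from the extrapolated point
        y = x_new + beta * (x_new - x) # extrapolation (momentum) step
        x = x_new
    return x

# Hypothetical test problem: minimize 0.5*x'Ax - b'x with A positive definite,
# so mu and L are the smallest and largest eigenvalues of A.
rng = np.random.default_rng(0)
n = 100
Q = rng.standard_normal((n, n))
A = Q.T @ Q + np.eye(n)
b = rng.standard_normal(n)
eigs = np.linalg.eigvalsh(A)
mu, L = eigs[0], eigs[-1]
x_star = np.linalg.solve(A, b)
x_hat = nesterov_strongly_convex(lambda x: A @ x - b, np.zeros(n), mu, L)
print("relative error:", np.linalg.norm(x_hat - x_star) / np.linalg.norm(x_star))

When μ and L are known, this scheme attains the optimal linear convergence rate for smooth strongly convex problems; the paper's contribution is to replace the known constants with locally estimated ones.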
ISSN: 0006-3835 (print), 1572-9125 (electronic)
DOI: 10.1007/s10543-011-0359-8