Weakly Convex Regularisers for Inverse Problems: Convergence of Critical Points and Primal-Dual Optimisation
Main authors: | , , , |
---|---|
Format: | Article |
Language: | English |
Summary: | Variational regularisation is the primary method for solving inverse
problems, and recently there has been considerable work leveraging deeply
learned regularisation for enhanced performance. However, few results exist
addressing the convergence of such regularisation, particularly within the
context of critical points as opposed to global minimisers. In this paper, we
present a generalised formulation of convergent regularisation in terms of
critical points, and show that this is achieved by a class of weakly convex
regularisers. We prove convergence of the primal-dual hybrid gradient method
for the associated variational problem, and, given a Kurdyka-Łojasiewicz
condition, an $\mathcal{O}(\log{k}/k)$ ergodic convergence rate. Finally,
applying this theory to learned regularisation, we prove universal
approximation for input weakly convex neural networks (IWCNN), and show
empirically that IWCNNs can lead to improved performance of learned adversarial
regularisers for computed tomography (CT) reconstruction. |
---|---|
DOI: | 10.48550/arxiv.2402.01052 |
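
For orientation, the setting the summary refers to can be written in standard notation. The symbols below ($A$, $y$, $\alpha$, $\rho$) follow common conventions in the variational-regularisation literature and are assumed here rather than quoted from the paper.

```latex
% Generic variational regularisation problem (notation assumed):
% reconstruct x from indirect, noisy data y ~ Ax by solving
\[
  \hat{x} \in \operatorname*{arg\,min}_{x} \; \mathcal{D}(Ax, y) + \alpha R(x),
\]
% where a regulariser R is called rho-weakly convex when
\[
  x \mapsto R(x) + \tfrac{\rho}{2}\|x\|^{2} \quad \text{is convex.}
\]
% For such nonconvex R, critical points of the objective, rather than
% global minimisers, are the natural solution concept the summary mentions.
```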
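The summary also mentions the primal-dual hybrid gradient (PDHG, Chambolle-Pock) method. Below is a minimal, self-contained sketch of PDHG on a toy sparse-recovery problem with a weakly convex minimax-concave (MCP) penalty, whose proximal map is the closed-form "firm thresholding" operator. Everything in this setup (the random operator `A`, the data `b`, the parameters `lam`, `gam`, and the classical convex-case step rule) is an illustrative assumption, not the paper's experiment; the paper itself derives the conditions under which PDHG converges in the weakly convex setting.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 80, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)      # toy forward operator (assumed)
x_true = np.zeros(n)
idx = rng.choice(n, size=10, replace=False)
x_true[idx] = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(m)    # noisy data

lam, gam = 0.05, 4.0   # MCP strength; MCP_{lam,gam} is (1/gam)-weakly convex

def prox_mcp(v, tau):
    """Prox of tau * MCP_{lam,gam}: firm thresholding with threshold
    t = tau*lam and shape g = gam/tau (well defined while g > 1)."""
    t, g = tau * lam, gam / tau
    av = np.abs(v)
    shrunk = g * (v - t * np.sign(v)) / (g - 1.0)
    return np.where(av <= t, 0.0, np.where(av <= g * t, shrunk, v))

# Classical convex-case step rule tau*sigma*||A||^2 <= 1 is used here;
# the paper establishes when PDHG still converges for weakly convex G.
Lnorm = np.linalg.norm(A, 2)
tau = sigma = 0.9 / Lnorm
theta = 1.0
x = np.zeros(n); x_bar = x.copy(); y = np.zeros(m)

for _ in range(500):
    # Dual step on F*(y), where F(z) = 0.5*||z - b||^2, so
    # prox_{sigma F*}(v) = (v - sigma*b) / (1 + sigma).
    y = (y + sigma * (A @ x_bar) - sigma * b) / (1.0 + sigma)
    x_new = prox_mcp(x - tau * (A.T @ y), tau)     # primal step on G
    x_bar = x_new + theta * (x_new - x)            # extrapolation
    x = x_new

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```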
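Finally, on input weakly convex neural networks: the paper gives its own IWCNN construction and universal-approximation result, which the sketch below does not reproduce. It only shows one elementary way to obtain a provably $\rho$-weakly convex learned function, namely $R(x) = \phi(x) - \tfrac{\rho}{2}\|x\|^2$ with $\phi$ an input-convex neural network (ICNN), since $R + \tfrac{\rho}{2}\|\cdot\|^2 = \phi$ is then convex by definition. The class names and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Input-convex network: convex in x because the z-path weights are
    clamped non-negative and softplus is convex and non-decreasing."""
    def __init__(self, dim, hidden=64, layers=3):
        super().__init__()
        self.Wx = nn.ModuleList([nn.Linear(dim, hidden) for _ in range(layers)])
        self.Wz = nn.ModuleList([nn.Linear(hidden, hidden, bias=False)
                                 for _ in range(layers - 1)])
        self.out = nn.Linear(hidden, 1, bias=False)

    def forward(self, x):
        z = F.softplus(self.Wx[0](x))
        for Wx, Wz in zip(self.Wx[1:], self.Wz):
            # clamp enforces non-negativity on the convexity-preserving path
            z = F.softplus(Wx(x) + F.linear(z, Wz.weight.clamp(min=0)))
        return F.linear(z, self.out.weight.clamp(min=0))

class WeaklyConvexRegulariser(nn.Module):
    """R(x) = phi(x) - (rho/2)||x||^2 is rho-weakly convex when phi is convex."""
    def __init__(self, dim, rho=1.0):
        super().__init__()
        self.phi, self.rho = ICNN(dim), rho

    def forward(self, x):
        return self.phi(x) - 0.5 * self.rho * (x * x).sum(dim=-1, keepdim=True)

R = WeaklyConvexRegulariser(dim=16, rho=0.5)
print(R(torch.randn(4, 16)).shape)   # torch.Size([4, 1]): one value per input
```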