DeepRLS: A Recurrent Network Architecture with Least Squares Implicit Layers for Non-blind Image Deconvolution
Format: Article
Language: English
Abstract: In this work, we study the problem of non-blind image deconvolution and propose a novel recurrent network architecture that leads to very competitive restoration results of high image quality. Motivated by the computational efficiency and robustness of existing large-scale linear solvers, we express the solution to this problem as the solution of a series of adaptive non-negative least-squares problems. This gives rise to our proposed Recurrent Least Squares Deconvolution Network (RLSDN) architecture, which consists of an implicit layer that imposes a linear constraint between its input and output. By design, our network serves two important purposes simultaneously: it implicitly models an effective image prior that can adequately characterize the set of natural images, and it recovers the corresponding maximum a posteriori (MAP) estimate. Experiments on publicly available datasets, in comparison with recent state-of-the-art methods, show that our proposed RLSDN approach achieves the best reported performance for both grayscale and color images in all tested scenarios. Furthermore, we introduce a novel training strategy that can be adopted by any network architecture whose pipeline involves the solution of linear systems. Our strategy completely eliminates the need to unroll the iterations required by the linear solver and thus significantly reduces the memory footprint during training. Consequently, this enables the training of deeper network architectures, which can further improve the reconstruction results.
DOI: 10.48550/arxiv.2112.05505
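
To make the core idea more concrete, here is a minimal sketch of what an implicit least-squares layer of this kind might look like, assuming a PyTorch implementation with a conjugate-gradient solver. The operator names `K`/`Kt` (blur and its adjoint) and `G`/`Gt` (a regularization operator and its adjoint) and the scalar regularization weight are illustrative assumptions, and the non-negativity constraint of the paper's adaptive non-negative least-squares formulation is dropped for brevity; this is a sketch of the general technique, not the authors' implementation.

```python
import torch

def conjugate_gradient(A, b, n_iters=50, tol=1e-6):
    """Solve A x = b for a symmetric positive-definite operator.
    `A` is a callable computing the matrix-vector product A(x)."""
    x = torch.zeros_like(b)
    r = b - A(x)
    p = r.clone()
    rs_old = torch.sum(r * r)
    for _ in range(n_iters):
        Ap = A(p)
        alpha = rs_old / (torch.sum(p * Ap) + 1e-12)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = torch.sum(r * r)
        if rs_new.sqrt() < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

class LeastSquaresLayer(torch.nn.Module):
    """Implicit layer: its output x is tied to its input y by the
    linear system (K^T K + lam G^T G) x = K^T y, i.e. the normal
    equations of  min_x ||K x - y||^2 + lam ||G x||^2."""
    def __init__(self, lam_init=1e-2):
        super().__init__()
        self.lam = torch.nn.Parameter(torch.tensor(lam_init))

    def forward(self, y, K, Kt, G, Gt):
        lam = torch.nn.functional.softplus(self.lam)  # keep weight positive
        A = lambda x: Kt(K(x)) + lam * Gt(G(x))
        return conjugate_gradient(A, Kt(y))
```

Note that, written this way, backpropagation would differentiate through every conjugate-gradient iteration, which is exactly the memory cost the paper's training strategy is designed to avoid.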
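
The training strategy described in the abstract removes the need to unroll the solver's iterations. A standard mechanism with this property (sketched here under the same assumptions; the paper's exact scheme may differ) is implicit differentiation: because the layer output satisfies A x = b, the implicit function theorem gives dL/db = A^{-T}(dL/dx), so the backward pass requires only one additional solve and no intermediate solver iterates need to be stored.

```python
import torch

class ImplicitLinearSolve(torch.autograd.Function):
    """Differentiate through x = A^{-1} b without unrolling the solver."""

    @staticmethod
    def forward(ctx, A, b):
        # Run the solver outside the autograd graph: no iterates are stored.
        with torch.no_grad():
            x = conjugate_gradient(A, b)  # from the sketch above
        ctx.A = A
        return x

    @staticmethod
    def backward(ctx, grad_x):
        # For symmetric A, the adjoint system A^T v = grad_x is just
        # one more solve with the same operator.
        with torch.no_grad():
            grad_b = conjugate_gradient(ctx.A, grad_x)
        # Gradients w.r.t. parameters captured inside A would need an
        # extra vector-Jacobian term; omitted here for brevity.
        return None, grad_b
```

Replacing the direct call `conjugate_gradient(A, Kt(y))` in the layer sketch above with `ImplicitLinearSolve.apply(A, Kt(y))` trades the unrolled computation graph for a single extra solve in the backward pass, which is the kind of memory saving the abstract refers to.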