Full-resolution image restoration for light field images via a spatial shift-variant degradation network

Bibliographic Details
Published in: Optics Express, 2024-02, Vol. 32 (4), pp. 5362-5379
Authors: Zhu, Conghui; Jiang, Yi; Yuan, Yan; Su, Lijuan; Yin, Xiaorui; Kong, Deqian
Format: Article
Language: English
Online access: Full text
Description
Abstract: Light field (LF) imaging systems face a trade-off between spatial and angular resolution under a limited sensor resolution. Various networks have been proposed to enhance the spatial resolution of sub-aperture images (SAIs). However, the spatial shift-variant characteristics of the LF have not been considered, and few efforts have been made to recover a full-resolution (FR) image. In this paper, we propose an FR image restoration method that embeds LF degradation kernels into the network. An explicit convolution model based on scalar diffraction theory is first derived to calculate the system response and the imaging matrix. Based on this analysis of LF image formation, we establish the mapping from an FR image to an SAI through the SAI kernel, which is a spatial shift-variant degradation (SSVD) kernel. The SSVD kernels are then embedded into the proposed network as prior knowledge. An SSVD convolution layer is specially designed to handle the view-wise degradation features and to speed up training, and a refinement block is designed to preserve detail across the entire image. Our network is evaluated on extensive simulated and real-world LF images and demonstrates superior performance compared with other methods. Experiments on a multi-focus scene further show that our network is suitable for both in-focus and defocused conditions.
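Note: The abstract describes a spatial shift-variant degradation (SSVD) convolution, in which each pixel of the full-resolution image is blurred by its own kernel rather than by a single shared kernel. The sketch below illustrates one plausible way to realize such a layer in PyTorch, assuming per-pixel kernels of fixed odd size; the function ssvd_conv and its tensor layout are hypothetical illustrations, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def ssvd_conv(image: torch.Tensor, kernels: torch.Tensor) -> torch.Tensor:
        # image:   (B, C, H, W) full-resolution image
        # kernels: (H, W, k, k) one degradation kernel per output pixel (k odd)
        B, C, H, W = image.shape
        k = kernels.shape[-1]
        # Extract a k x k neighborhood around every pixel: (B, C*k*k, H*W)
        patches = F.unfold(image, kernel_size=k, padding=k // 2)
        patches = patches.view(B, C, k * k, H * W)
        # Rearrange the per-pixel kernels to broadcast against the patches:
        # (H, W, k, k) -> (1, 1, k*k, H*W)
        weights = kernels.reshape(H * W, k * k).t().reshape(1, 1, k * k, H * W)
        # Per-pixel weighted sum over each neighborhood = shift-variant convolution
        return (patches * weights).sum(dim=2).view(B, C, H, W)

    # Toy usage with normalized random kernels (each sums to 1, like blur kernels)
    img = torch.rand(1, 3, 32, 48)
    ker = torch.rand(32, 48, 5, 5)
    ker = ker / ker.sum(dim=(-2, -1), keepdim=True)
    out = ssvd_conv(img, ker)  # -> (1, 3, 32, 48)

Expressing the layer through F.unfold turns the shift-variant convolution into a single batched elementwise product and sum, which is one way to make fixed degradation kernels cheap enough to embed in a trainable network.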
ISSN: 1094-4087
DOI: 10.1364/OE.506541