Unsupervised reconstruction with a registered time-unsheared image constraint for compressed ultrafast photography

Bibliographic Details
Published in: Optics Express, 2024-04, Vol. 32 (9), p. 16333-16350
Authors: Zhou, Haoyu; Song, Yan; Yao, Zhiming; Hei, Dongwei; Li, Yang; Duan, Baojun; Liu, Yinong; Sheng, Liang
Format: Article
Language: English
Online access: Full text
Description
Abstract: Compressed ultrafast photography (CUP) is a computational imaging technology capable of capturing transient scenes on the picosecond scale with a sequence depth of hundreds of frames. Since the inverse problem of CUP is ill-posed, it is challenging to further improve the reconstruction quality under conditions of high noise and high compression ratio. In addition, many works add an external charge-coupled device (CCD) camera to the CUP system to form a time-unsheared view, because the added constraint can improve the quality of the reconstructed images. However, since the images are collected by different cameras, even a slight affine transformation between them can have a large impact on the reconstruction quality. Here, we propose an algorithm that combines the time-unsheared-image-constrained CUP system with unsupervised neural networks. An image registration network is also introduced into the framework to learn the affine transformation parameters of the input images. The proposed algorithm effectively exploits the implicit image prior of the neural network as well as the extra hardware prior information provided by the time-unsheared view. Combined with the image registration network, this joint learning model further improves the quality of the reconstructed images without any training dataset. Simulation and experimental results demonstrate the application prospects of our algorithm in ultrafast event capture.
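To make the described pipeline concrete, below is a minimal, hypothetical PyTorch sketch of one way such an unsupervised scheme could be set up: an untrained generator produces the frame cube (a deep-image-prior-style implicit prior), and the loss combines fidelity to the sheared CUP measurement with a time-unsheared constraint whose affine registration parameters are optimized jointly. The toy forward model, the tensor names (sheared_meas, unsheared_meas, mask), the tiny network, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: unsupervised CUP reconstruction with a time-unsheared
# constraint and a jointly optimized affine registration (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

T, H, W = 16, 64, 64                     # frames, height, width (toy sizes)
mask = (torch.rand(H, W) > 0.5).float()  # coded-aperture pattern (assumed known)

def cup_forward(video, mask):
    """Encode, temporally shear, and integrate: (T, H, W) -> (H, W + T - 1)."""
    meas = torch.zeros(H, W + T - 1)
    for t in range(T):
        meas[:, t:t + W] += video[t] * mask  # frame t shifted by t columns, accumulated
    return meas

# Synthetic measurements from a hidden ground-truth scene (for this sketch only).
gt = torch.rand(T, H, W)
sheared_meas = cup_forward(gt, mask)
unsheared_meas = gt.sum(0)               # external CCD view: time integral, no shearing

# Untrained generator (deep-image-prior style): fixed noise -> video cube.
net = nn.Sequential(
    nn.Conv2d(8, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, T, 3, padding=1), nn.Sigmoid(),
)
z = torch.randn(1, 8, H, W)

# Affine registration parameters (initialized to the identity), learned jointly.
theta = nn.Parameter(torch.tensor([[1., 0., 0.], [0., 1., 0.]]).unsqueeze(0))

opt = torch.optim.Adam(list(net.parameters()) + [theta], lr=1e-3)
lam = 1.0                                # weight of the time-unsheared constraint

for step in range(2000):
    opt.zero_grad()
    video = net(z)[0]                    # (T, H, W) estimate of the dynamic scene
    # Warp the reconstruction's time integral into the CCD's coordinate frame.
    grid = F.affine_grid(theta, (1, 1, H, W), align_corners=False)
    warped = F.grid_sample(video.sum(0)[None, None], grid, align_corners=False)[0, 0]
    loss = F.mse_loss(cup_forward(video, mask), sheared_meas) \
         + lam * F.mse_loss(warped, unsheared_meas)
    loss.backward()
    opt.step()
```

In this toy setup the affine warp is parameterized directly by a 2x3 matrix and applied with grid_sample, whereas the paper learns the registration with a dedicated network; the sketch only illustrates the joint-optimization idea, not that architecture.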
ISSN: 1094-4087
DOI: 10.1364/OE.519872