Event-based video deblurring based on image and event feature fusion

Detailed Description

Bibliographic Details
Published in: Expert Systems with Applications, 2023-08, Vol. 223, p. 119917, Article 119917
Authors: Kim, Jeongmin; Ghosh, Dipon Kumar; Jung, Yong Ju
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Summary: Event-based video deblurring performs deblurring by taking as input the event sequence data obtained from an event camera, a bio-inspired sensor, along with blurry frames. It has gained attention as a method that can overcome the limitations of conventional frame-based video deblurring. In this study, we propose a novel event-based video deblurring network based on convolutional neural networks (CNNs). Unlike existing event-based deblurring methods that use only event data, the proposed method fuses all the available information from the current blurry frame, previously recovered sharp frames, and event data to deblur a video. Specifically, we propose an image and event feature fusion (IEFF) module that fuses event data with the current intensity frame information. Additionally, we propose a current-frame reconstruction from previous-frame (CRP) module, which acquires a pseudo sharp frame from a previously recovered sharp frame, and a fusion-based residual estimation (FRE) module, which fuses the event features with the image features of the previous sharp frame extracted from the CRP module. Verification experiments on synthetic and real datasets demonstrate that the proposed method achieves superior quantitative and qualitative results compared to state-of-the-art methods.

Highlights:
•Event-based deblurring estimates sharp frames by fusing event and frame information.
•Two pseudo sharp frames are estimated and merged to obtain a final sharp frame.
•A fusion-based residual estimation module fuses all the available input information.
•Quantitative and qualitative comparisons on both synthetic and real event datasets.
•The proposed model outperforms existing models by significant margins.
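The core idea described in the abstract, fusing event features with image features, can be sketched minimally as follows. This is an illustrative assumption, not the paper's actual IEFF architecture: the function name, tensor shapes, and the concatenate-then-1x1-projection design are hypothetical choices for demonstration.

```python
import numpy as np

def fuse_image_event(img_feat, evt_feat, w, b):
    """Hypothetical image/event feature fusion (IEFF-style) step.

    img_feat: (C_img, H, W) features from the blurry intensity frame
    evt_feat: (C_evt, H, W) features from the event stream
    w: (C_img, C_img + C_evt) learned projection (a 1x1 conv in matrix form)
    b: (C_img,) bias
    Returns fused features of shape (C_img, H, W).
    """
    c_img, h, wd = img_feat.shape
    # Concatenate the two feature maps along the channel axis.
    stacked = np.concatenate([img_feat, evt_feat], axis=0)   # (C_img+C_evt, H, W)
    flat = stacked.reshape(stacked.shape[0], -1)             # (C_in, H*W)
    # Channel mixing via the 1x1 projection, then ReLU.
    mixed = np.maximum(w @ flat + b[:, None], 0.0)
    # Add the mixed features back to the image features as a residual.
    return img_feat + mixed.reshape(c_img, h, wd)

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 4, 4))
evt = rng.standard_normal((2, 4, 4))
w = rng.standard_normal((8, 10)) * 0.1
b = np.zeros(8)
out = fuse_image_event(img, evt, w, b)
print(out.shape)  # (8, 4, 4)
```

In a trained network, w and b would be learned parameters and the projection would typically be a 1x1 convolution layer; the residual connection lets the event branch refine, rather than replace, the image features.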
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2023.119917