An event-guided image motion deblurring method based on dark channel prior loss

Bibliographic Details
Published in: Optics and Lasers in Engineering, 2024-10, Vol. 181, p. 108431, Article 108431
Authors: Guo, Guangsha, Lv, Hengyi, Zhao, Yuchen, Liu, Hailong, Zhang, Yisa
Format: Article
Language: English
Online access: Full text
Description
Abstract: In scenarios involving high-speed motion, traditional frame-based cameras frequently suffer from motion blur due to extended exposure times. In contrast, event cameras asynchronously capture changes in pixel brightness and produce event streams with high temporal resolution at the microsecond level. This paper presents a new baseline for event-guided image motion deblurring, named EGDNet, which leverages motion information from event streams to significantly reduce image motion blur. Central to our approach is a Channel Attention Feature Extraction Network (CAFN) within the deblurring framework. CAFN is designed to locate and restore severely blurred regions, enhancing image sharpness and minimizing ghosting effects. Additionally, we propose a novel dark channel loss function, inspired by the dark channel prior, to further refine image contours and detail restoration. Complementing our methodological contributions, we also introduce a dedicated dataset for event-camera deblurring, built to reflect realistic event-camera conditions with higher fidelity than existing datasets. Empirical evaluations show that EGDNet surpasses contemporary state-of-the-art techniques, achieving a 1-2 dB improvement in peak signal-to-noise ratio (PSNR). The source code and dataset are openly available at https://github.com/ice-cream567/EGDNet.

Highlights:
• Integrated physical models and deep learning for image motion deblurring.
• Devised a channel attention feature extraction network to restore highly blurred image regions.
• Leveraged a simulator to provide a specialized deblurring dataset for event cameras.
• Enhanced visual output quality by integrating dark channel prior knowledge into network training supervision (minimal sketches of such a loss and of a channel attention block follow this list).
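
The dark channel prior observes that in sharp natural images most local patches contain at least one pixel that is nearly zero in some color channel: D(I)(x) = min_{y∈N(x)} min_{c∈{r,g,b}} I_c(y). Blur averages neighboring intensities and inflates the dark channel, so supervising the dark channel of the restored image encourages sharpness. The abstract does not give the exact formulation, so the PyTorch sketch below assumes an L1 penalty between the dark channels of the restored output and the sharp ground truth; the function names and the patch size are illustrative, not taken from the paper.

    import torch
    import torch.nn.functional as F

    def dark_channel(img: torch.Tensor, patch_size: int = 5) -> torch.Tensor:
        """Dark channel of a batch (B, 3, H, W): minimum over color channels,
        then a minimum over each local patch."""
        min_c = img.min(dim=1, keepdim=True).values  # (B, 1, H, W)
        pad = patch_size // 2
        # Spatial minimum implemented as max-pooling of the negated map.
        return -F.max_pool2d(-min_c, kernel_size=patch_size, stride=1, padding=pad)

    def dark_channel_loss(restored: torch.Tensor, sharp: torch.Tensor) -> torch.Tensor:
        """Assumed form of the loss: L1 distance between dark channels."""
        return F.l1_loss(dark_channel(restored), dark_channel(sharp))

In training, such a term would typically be weighted and added to a pixel-wise reconstruction loss, e.g. total = F.l1_loss(restored, sharp) + 0.1 * dark_channel_loss(restored, sharp); both the combination and the weight 0.1 are assumptions, not values from the paper.

The abstract likewise describes CAFN only at a high level. A standard way to let a network reweight feature channels and emphasize the features most useful for restoring heavily blurred content is squeeze-and-excitation-style channel attention; the block below is one plausible building element of this kind, not the paper's actual CAFN.

    import torch.nn as nn

    class ChannelAttention(nn.Module):
        """Squeeze-and-excitation-style gate: global average pooling followed
        by a two-layer bottleneck that produces per-channel weights."""
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            self.gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),                       # squeeze to (B, C, 1, 1)
                nn.Conv2d(channels, channels // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1),
                nn.Sigmoid(),                                  # weights in [0, 1]
            )

        def forward(self, x):
            return x * self.gate(x)                            # reweight channels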
ISSN: 0143-8166, 1873-0302
DOI: 10.1016/j.optlaseng.2024.108431