Towards Real-World Video Deblurring by Exploring Blur Formation Process
Format: Article
Language: English
Online access: Order full text
Abstract: This paper explores how to synthesize close-to-real blurs so that video deblurring models trained on them generalize well to real-world blurry videos. In recent years, deep learning-based approaches have achieved promising success on the video deblurring task. However, models trained on existing synthetic datasets still suffer from generalization problems in real-world blurry scenarios, producing undesired artifacts, and the factors behind these failures remain unknown. We therefore revisit the classical blur synthesis pipeline and identify the possible causes, including shooting parameters, blur formation space, and the image signal processor (ISP). To analyze the effects of these potential factors, we first collect an ultra-high frame-rate (940 FPS) RAW video dataset as the data basis for synthesizing various kinds of blurs. We then propose a novel realistic blur synthesis pipeline, termed RAW-Blur, that leverages blur formation cues. Through extensive experiments, we demonstrate that synthesizing blurs in the RAW space and adopting the same ISP as the real-world testing data can effectively eliminate the negative effects of synthetic data. Furthermore, the shooting parameters of the synthesized blurry video, e.g., exposure time and frame rate, play significant roles in improving the performance of deblurring models. Impressively, models trained on blurry data synthesized by the proposed RAW-Blur pipeline obtain more than a 5 dB PSNR gain over those trained on existing synthetic blur datasets. We believe the novel realistic synthesis pipeline and the corresponding RAW video dataset can help the community easily construct customized blur datasets to substantially improve real-world video deblurring performance, instead of laboriously collecting real data pairs.
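This record does not include the paper's implementation, but the core idea the abstract describes can be sketched: average consecutive sharp frames from a high-frame-rate capture in linear RAW space (mimicking photon accumulation over the exposure window), then render the result through an ISP. The sketch below is a minimal illustration under stated assumptions; the function names (`synthesize_raw_blur`, `toy_isp`), the RGGB layout, and all parameter values are hypothetical, and the toy ISP merely stands in for the camera-matched pipeline the paper emphasizes.

```python
import numpy as np

def synthesize_raw_blur(raw_frames, exposure_frames):
    """Average consecutive high-frame-rate RAW frames to mimic a long
    exposure; averaging in linear RAW space approximates how light
    accumulates on the sensor while the shutter is open."""
    # raw_frames: (N, H, W) linear Bayer mosaics from a high-FPS capture
    window = raw_frames[:exposure_frames].astype(np.float64)
    return window.mean(axis=0)

def toy_isp(raw, wb_gains=(2.0, 1.0, 1.5), gamma=2.2):
    """A deliberately simplified stand-in for a camera ISP: white balance
    on an assumed RGGB mosaic, naive half-resolution demosaicing, and
    gamma encoding. The paper's finding implies a real pipeline should
    match the ISP of the target camera exactly."""
    r = raw[0::2, 0::2] * wb_gains[0]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2 * wb_gains[1]
    b = raw[1::2, 1::2] * wb_gains[2]
    rgb = np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)
    return rgb ** (1.0 / gamma)

# Shooting parameters set how many sharp frames fall inside one exposure:
# a 940 FPS clip with a target exposure of 1/60 s covers about
# 940 / 60 ~= 16 sharp frames per synthesized blurry frame.
fps, exposure_time = 940, 1 / 60
n_avg = round(fps * exposure_time)
raw_clip = np.random.rand(n_avg, 128, 128)  # placeholder RAW data
blurry_rgb = toy_isp(synthesize_raw_blur(raw_clip, n_avg))
```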
DOI: 10.48550/arxiv.2208.13184