LED: A Large-scale Real-world Paired Dataset for Event Camera Denoising
Main authors: , , , , , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Event cameras have significant advantages in capturing dynamic scene information, but they are prone to noise interference, particularly in challenging conditions such as low threshold and low illumination. However, most existing research focuses on gentle situations, which hinders event camera applications in realistic, complex scenarios. To tackle this limitation and advance the field, we construct a new paired real-world event denoising dataset (LED), comprising 3K sequences with 18K seconds of high-resolution (1200×680) event streams. It shows three notable distinctions compared to other datasets: diverse noise levels and scenes, a larger scale at high resolution, and high-quality ground truth (GT). Specifically, it contains stepped parameters and varying illumination across diverse scenarios. Moreover, based on the property that noise events are inconsistent while signal events are consistent, we propose a novel and effective denoising framework (DED) that uses homogeneous dual events to generate the GT, better separating noise from the raw stream. Furthermore, we design a bio-inspired baseline leveraging Leaky Integrate-and-Fire (LIF) neurons with dynamic thresholds to realize accurate denoising. Experimental results demonstrate the remarkable performance of the proposed approach on different datasets. The dataset and code are available at https://github.com/Yee-Sing/led.
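The LIF-based idea can be pictured as a per-pixel temporal filter: each incoming event charges a leaky membrane potential, and only events that drive the potential above an adaptive firing threshold are kept as signal, since real signal events arrive in temporally consistent bursts while noise events tend to be isolated. The sketch below is only an illustration of this general mechanism, not the paper's actual baseline; the function name `lif_denoise` and all parameter values (`tau`, `v_gain`, `v_th0`, `th_adapt`) are assumptions made for demonstration.

```python
import numpy as np

def lif_denoise(events, height, width,
                tau=0.03,      # leak time constant in seconds (assumed value)
                v_gain=1.0,    # potential added per incoming event (assumed)
                v_th0=2.0,     # base firing threshold (assumed)
                th_adapt=0.5): # threshold increase after each firing (assumed)
    """Illustrative per-pixel LIF filter for event denoising.

    `events` is an iterable of (t, x, y, p) tuples with t in seconds.
    An event is kept as signal only if it pushes the pixel's membrane
    potential above a dynamically adapting threshold; isolated (noise)
    events leak away before reaching it.
    """
    v = np.zeros((height, width))            # membrane potential per pixel
    v_th = np.full((height, width), v_th0)   # dynamic threshold per pixel
    last_t = np.zeros((height, width))       # time of last update per pixel

    kept = []
    for t, x, y, p in events:
        # Leak: exponentially decay the potential and relax the threshold
        # toward its base value for the time elapsed since the last event.
        dt = t - last_t[y, x]
        decay = np.exp(-dt / tau)
        v[y, x] *= decay
        v_th[y, x] = v_th0 + (v_th[y, x] - v_th0) * decay
        last_t[y, x] = t

        # Integrate the incoming event.
        v[y, x] += v_gain

        # Fire: keep the event as signal, reset the potential, and raise
        # the local threshold so highly active pixels do not pass everything.
        if v[y, x] >= v_th[y, x]:
            kept.append((t, x, y, p))
            v[y, x] = 0.0
            v_th[y, x] += th_adapt
    return kept
```

With these dynamics, a burst of events at one pixel integrates faster than it leaks and eventually crosses the threshold, whereas an isolated noise event decays away before firing; the threshold adaptation after each firing is one simple way to realize the "dynamic thresholds" mentioned in the abstract.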
DOI: 10.48550/arxiv.2405.19718