Enhanced Frame and Event-Based Simulator and Event-Based Video Interpolation Network
Saved in:
Main Authors: | |
Format: | Article |
Language: | English |
Keywords: | |
Online Access: | Order full text |
Tags: | |
Summary: | Fast neuromorphic event-based vision sensors (Dynamic Vision Sensor, DVS) can
be combined with slower conventional frame-based sensors to enable
higher-quality inter-frame interpolation than traditional methods relying on
fixed motion approximations such as optical flow. In this work we present a
new, advanced event simulator that can produce realistic scenes recorded by a
camera rig with an arbitrary number of sensors located at fixed offsets. It
includes a new configurable frame-based image sensor model with realistic image
quality reduction effects, and an extended DVS model with more accurate
characteristics. We use our simulator to train a novel reconstruction model
designed for end-to-end reconstruction of high-fps video. Unlike previously
published methods, our method does not require the frame and DVS cameras to
have the same optics, positions, or camera resolutions. It is also not limited
to objects at a fixed distance from the sensor. We show that data generated by our
simulator can be used to train our new model, leading to reconstructed images
on public datasets of equivalent or better quality than the state of the art.
We also show our method generalizing to data recorded by real sensors. |
DOI: | 10.48550/arxiv.2112.09379 |
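The DVS behavior the abstract refers to is commonly modeled as follows: a pixel emits an event when the change in log intensity since its last event exceeds a contrast threshold, with polarity indicating brightening or darkening. The sketch below illustrates this standard two-frame threshold model; it is a simplification for intuition, not the paper's extended DVS model, and the function name and threshold value are illustrative assumptions.

```python
import numpy as np

def dvs_events(prev_frame, next_frame, threshold=0.2, eps=1e-3):
    """Emit DVS-style events where the log-intensity change between two
    frames crosses the contrast threshold.

    Returns (ys, xs, polarity): pixel coordinates of events and their
    polarity (+1 for ON / brightening, -1 for OFF / darkening).
    """
    # Work in the log-intensity domain, as a DVS pixel does; eps avoids log(0).
    log_prev = np.log(prev_frame.astype(np.float64) + eps)
    log_next = np.log(next_frame.astype(np.float64) + eps)
    diff = log_next - log_prev

    # An event fires wherever |Δ log I| reaches the contrast threshold.
    ys, xs = np.nonzero(np.abs(diff) >= threshold)
    polarity = np.sign(diff[ys, xs]).astype(np.int8)
    return ys, xs, polarity

# Example: one pixel brightens from 100 to 150, so
# Δ log I ≈ log(1.5) ≈ 0.405 > 0.2 and a single ON event fires.
prev = np.full((2, 2), 100, dtype=np.uint8)
nxt = prev.copy()
nxt[0, 0] = 150
ys, xs, pol = dvs_events(prev, nxt)
```

In a full event simulator this comparison runs against interpolated intensity at fine time steps rather than between two raw frames, which is what produces the asynchronous, high-rate event streams used for interpolation.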