SpikeCodec: An End-to-end Learned Compression Framework for Spiking Camera
Main authors: , , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Recently, the bio-inspired spike camera, with its capability for continuous motion recording, has attracted tremendous attention due to its ultra-high temporal resolution. This imaging characteristic results in a huge data storage and transmission burden compared to that of a traditional camera, raising a severe challenge and an imminent need for compression of spike camera content. Existing lossy data compression methods cannot compress spike streams efficiently because of their integrate-and-fire characteristic and binarized data structure. Considering the imaging principle and information fidelity of spike cameras, we introduce an effective and robust representation of spike streams. Based on this representation, we propose a novel learned spike compression framework that combines scene recovery, a variational auto-encoder, and a spike simulator. To our knowledge, it is the first data-trained model for efficient and robust spike stream compression. Extensive experimental results show that our method outperforms conventional and learning-based codecs, contributing a strong baseline for learned spike data compression.
DOI: 10.48550/arxiv.2306.14108
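
The abstract attributes the compression difficulty to the camera's integrate-and-fire principle and the binarized structure of spike streams. As a rough illustration of that principle only (a minimal sketch, not the authors' implementation), the Python snippet below integrates per-pixel luminance over time and emits a binary spike whenever a firing threshold is crossed; the function name, threshold value, and array shapes are illustrative assumptions.

```python
# Minimal sketch of integrate-and-fire spike stream formation.
# Not the authors' code; names, threshold, and shapes are illustrative assumptions.
import numpy as np

def simulate_spike_stream(luminance, threshold=255.0):
    """Integrate per-pixel luminance over time and emit a binary spike,
    resetting the accumulator, whenever the firing threshold is crossed.

    luminance: array of shape (T, H, W) with per-step light intensity.
    Returns a binary array of shape (T, H, W) -- the spike stream.
    """
    T, H, W = luminance.shape
    accumulator = np.zeros((H, W), dtype=np.float64)
    spikes = np.zeros((T, H, W), dtype=np.uint8)
    for t in range(T):
        accumulator += luminance[t]          # integrate incoming light
        fired = accumulator >= threshold     # pixels that reach the threshold
        spikes[t][fired] = 1                 # emit a binary spike
        accumulator[fired] -= threshold      # reset by subtraction, keep residue
    return spikes

# Toy usage: a bright region fires far more often than a dark one.
lum = np.zeros((100, 4, 4))
lum[:, :2, :] = 200.0   # bright half of the sensor
lum[:, 2:, :] = 20.0    # dark half of the sensor
s = simulate_spike_stream(lum)
print(s.sum(axis=0))    # spike counts per pixel over 100 time steps
```

Brighter pixels cross the threshold far more often than darker ones, so at the camera's ultra-high temporal sampling rate the resulting binary stream becomes the storage and transmission burden the paper sets out to compress.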