A Temporal Densely Connected Recurrent Network for Event-based Human Pose Estimation
Format: Article
Language: English
Abstract: Event cameras are emerging bio-inspired vision sensors that report per-pixel brightness changes asynchronously. They offer the notable advantages of high dynamic range, high-speed response, and a low power budget, which enable them to best capture local motions in uncontrolled environments. This motivates us to unlock the potential of event cameras for human pose estimation, a task that has rarely been explored with this modality. Due to the novel paradigm shift from conventional frame-based cameras, however, event signals in a time interval contain very limited information: event cameras capture only the moving body parts and ignore static ones, so some parts appear incomplete or even disappear within the interval. This paper proposes a novel densely connected recurrent architecture to address this problem of incomplete information. With this recurrent architecture, we explicitly model not only sequential but also non-sequential geometric consistency across time steps, accumulating information from previous frames to recover the entire human body and achieving stable and accurate human pose estimation from event data. Moreover, to better evaluate our model, we collect a large-scale multimodal event-based dataset with human pose annotations, which is, to the best of our knowledge, by far the most challenging one. Experimental results on two public datasets and our own dataset demonstrate the effectiveness and strength of our approach. Code will be made available online to facilitate future research.
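To make the dense recurrent idea concrete, the following is a minimal, illustrative sketch, not the authors' implementation: the state at each time step is fused from the current event feature and the hidden states of all earlier steps, so both sequential and non-sequential dependencies across time are modeled explicitly. The class name, layer choices, and sizes (DenselyConnectedRecurrentBlock, feat_dim, num_steps) are assumptions made purely for illustration.

```python
# Illustrative sketch of densely connected recurrence (assumed design,
# not the paper's code): step t sees the current event feature AND the
# hidden states of *all* previous steps, not just the immediately
# preceding one, so static parts missed in one interval can be
# recovered from earlier states.
import torch
import torch.nn as nn


class DenselyConnectedRecurrentBlock(nn.Module):
    def __init__(self, feat_dim: int, num_steps: int):
        super().__init__()
        self.num_steps = num_steps
        # One fusion layer per step; step t fuses the current feature
        # with the concatenation of the t earlier hidden states.
        self.fuse = nn.ModuleList(
            nn.Linear(feat_dim * (t + 1), feat_dim) for t in range(num_steps)
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_steps, feat_dim) per-interval event features
        states = []
        for t in range(self.num_steps):
            # Dense connectivity across time: concatenate the current
            # input with every earlier hidden state.
            inputs = torch.cat([frames[:, t]] + states, dim=-1)
            states.append(torch.tanh(self.fuse[t](inputs)))
        # Per-step states, e.g. for per-step pose heads:
        # (batch, num_steps, feat_dim)
        return torch.stack(states, dim=1)


if __name__ == "__main__":
    block = DenselyConnectedRecurrentBlock(feat_dim=64, num_steps=4)
    out = block(torch.randn(2, 4, 64))
    print(out.shape)  # torch.Size([2, 4, 64])
```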
DOI: 10.48550/arxiv.2209.07034