YCB-Ev 1.1: Event-vision dataset for 6DoF object pose estimation
Saved in:
Main Author:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Our work introduces the YCB-Ev dataset, which contains
synchronized RGB-D frames and event data, enabling the evaluation of 6DoF
object pose estimation algorithms using these modalities.
This dataset provides ground truth 6DoF object poses for the same 21 YCB
objects that were used in the YCB-Video (YCB-V) dataset, allowing for
cross-dataset algorithm performance evaluation.
The dataset consists of 21 synchronized event and RGB-D sequences, totalling
13,851 frames (7 minutes and 43 seconds of event data). Notably, 12 of these
sequences feature the same object arrangement as the YCB-V subset used in the
BOP challenge.
Ground truth poses are generated by detecting objects in the RGB-D frames,
interpolating the poses to align with the event timestamps, and then
transferring them to the event coordinate frame using extrinsic calibration.
Our dataset is the first to provide ground truth 6DoF pose data for event
streams. Furthermore, we evaluate the generalization capabilities of two
state-of-the-art algorithms, which were pre-trained for the BOP challenge,
using our novel YCB-V sequences.
The dataset is publicly available at https://github.com/paroj/ycbev.
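The ground-truth pipeline described above (interpolate RGB-D detections to event timestamps, then transfer them into the event coordinate frame via the extrinsic calibration) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, and it assumes SciPy's `Rotation`/`Slerp` utilities, linear interpolation for translation, and an extrinsic calibration given as a rotation plus translation.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_pose(t, t0, R0, p0, t1, R1, p1):
    """Interpolate a 6DoF pose at event timestamp t between two RGB-D
    detections at t0 and t1: SLERP for rotation, lerp for translation.
    (Hypothetical helper; the paper only states that poses are
    interpolated to align with event timestamps.)"""
    alpha = (t - t0) / (t1 - t0)
    slerp = Slerp([t0, t1], Rotation.concatenate([R0, R1]))
    R_t = slerp(t)
    p_t = (1.0 - alpha) * p0 + alpha * p1
    return R_t, p_t

def transfer_to_event_frame(R_obj, p_obj, R_ext, p_ext):
    """Move an object pose from the RGB-D camera frame into the event
    camera frame using an assumed extrinsic calibration (R_ext, p_ext)."""
    R_ev = R_ext * R_obj
    p_ev = R_ext.apply(p_obj) + p_ext
    return R_ev, p_ev
```

For example, interpolating halfway between an identity pose and one rotated 90 degrees about z (and translated 2 m along x) yields a pose rotated 45 degrees at x = 1 m, which is then shifted by the extrinsic offset.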
DOI: 10.48550/arxiv.2309.08482