Learning Kinematic Descriptions using SPARE: Simulated and Physical ARticulated Extendable dataset
Main authors: | , , |
---|---|
Format: | Article |
Language: | English |
Abstract: | Next generation robots will need to understand intricate and articulated objects as they cooperate in human environments. To do so, these robots will need to move beyond their current abilities (working with relatively simple objects in a task-indifferent manner) toward more sophisticated abilities that dynamically estimate the properties of complex, articulated objects. To that end, we make two compelling contributions toward general articulated (physical) object understanding in this paper. First, we introduce a new dataset, SPARE: Simulated and Physical ARticulated Extendable dataset. SPARE is an extendable open-source dataset providing equivalent simulated and physical instances of articulated objects (kinematic chains), giving the greater research community a training and evaluation tool for methods that generate kinematic descriptions of articulated objects. To the best of our knowledge, this is the first joint visual and physical (3D-printable) dataset for the Vision community. Second, we present a deep neural network that can predict the number of links and the lengths of the links of an articulated object. These new ideas outperform classical approaches to understanding kinematic chains, such as tracking-based methods, which fail in the case of occlusion and do not leverage multiple views when available. |
---|---|
DOI: | 10.48550/arxiv.1803.11147 |
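
The record above does not describe the network architecture. As an illustration only, the following minimal PyTorch sketch shows one way to set up the two prediction targets named in the abstract: link count as classification and link lengths as regression. The backbone, input resolution, and the `max_links` cap are assumptions for the sketch, not details taken from the paper.

```python
# Hypothetical sketch (not the authors' architecture): a small CNN backbone with
# two heads, one classifying the number of links in a kinematic chain and one
# regressing a length for each potential link.
import torch
import torch.nn as nn

class KinematicChainNet(nn.Module):
    def __init__(self, max_links: int = 8):
        super().__init__()
        self.max_links = max_links
        # Assumed input: 3x128x128 RGB renders (e.g., of SPARE-style chains).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head 1: classify the number of links (1..max_links).
        self.count_head = nn.Linear(128, max_links)
        # Head 2: regress one length per potential link; unused slots would be
        # masked out by the loss when a chain has fewer links.
        self.length_head = nn.Linear(128, max_links)

    def forward(self, images: torch.Tensor):
        features = self.backbone(images)
        return self.count_head(features), self.length_head(features)

if __name__ == "__main__":
    net = KinematicChainNet()
    batch = torch.randn(4, 3, 128, 128)      # stand-in for dataset images
    count_logits, lengths = net(batch)
    print(count_logits.shape, lengths.shape)  # torch.Size([4, 8]) torch.Size([4, 8])
```

A multi-head layout like this is one common way to handle a mixed classification/regression target; the actual method in arXiv:1803.11147 may differ substantially.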