Online Estimation of Articulated Objects with Factor Graphs using Vision and Proprioceptive Sensing
Main Authors: | , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Abstract: | From dishwashers to cabinets, humans interact with articulated objects every
day, and for a robot to assist in common manipulation tasks, it must learn a
representation of articulation. Recent deep learning methods can
provide powerful vision-based priors on the affordance of articulated objects
from previous, possibly simulated, experiences. In contrast, many works
estimate articulation by observing the object in motion, requiring the robot to
already be interacting with the object. In this work, we propose to use the
best of both worlds by introducing an online estimation method that merges
vision-based affordance predictions from a neural network with interactive
kinematic sensing in an analytical model. Our work has the benefit of using
vision to predict an articulation model before touching the object, while also
being able to update the model quickly from kinematic sensing during the
interaction. In this paper, we implement a full system using shared autonomy
for robotic opening of articulated objects, in particular objects in which the
articulation is not apparent from vision alone. We implemented our system on a
real robot and performed several autonomous closed-loop experiments in which
the robot had to open a door with an unknown joint while estimating the
articulation online. Our system achieved an 80% success rate for autonomous
opening of unknown articulated objects. |
---|---|
DOI: | 10.48550/arxiv.2309.16343 |
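The abstract describes fusing a vision-based prior on the articulation with kinematic measurements gathered during the interaction. As a rough illustration of that idea, the sketch below poses the fusion as a small weighted nonlinear least-squares problem in Python. It is not the paper's factor-graph implementation: the hinge prior, noise values, and gripper samples are invented for the example, and only the general prior-plus-measurement structure follows the abstract.

```python
# Minimal sketch (assumptions, not the paper's method): estimate a door's
# hinge location by fusing a vision-predicted prior with gripper positions
# recorded while the robot pulls the handle. A revolute joint constrains
# the gripper samples to lie on a circle around the hinge.
import numpy as np
from scipy.optimize import least_squares

# Hypothetical vision prior on the hinge position (x, y) in the robot
# frame, with a large standard deviation reflecting its uncertainty.
hinge_prior = np.array([0.80, -0.40])
sigma_prior = 0.10   # metres

# Hypothetical proprioceptive observations: gripper positions during the
# opening motion, measured much more precisely than the vision prior.
sigma_obs = 0.005    # metres
gripper_xy = np.array([
    [0.80, 0.20], [0.70, 0.19], [0.60, 0.16], [0.50, 0.12], [0.41, 0.06],
])

def residuals(theta):
    """theta = (hinge_x, hinge_y, radius); returns whitened residuals."""
    hinge, radius = theta[:2], theta[2]
    # Kinematic terms: each gripper sample should lie at the (unknown)
    # door radius from the hinge.
    r_kin = (np.linalg.norm(gripper_xy - hinge, axis=1) - radius) / sigma_obs
    # Prior term: keep the hinge estimate near the network's prediction.
    r_prior = (hinge - hinge_prior) / sigma_prior
    return np.concatenate([r_kin, r_prior])

# Initialise from the vision prior plus a rough radius guess; an online
# system would re-solve (or incrementally update) as new samples arrive.
x0 = np.array([*hinge_prior, 0.5])
sol = least_squares(residuals, x0)
hinge_est, radius_est = sol.x[:2], sol.x[2]
print("estimated hinge:", hinge_est, "radius:", radius_est)
```

Before any contact, only the prior term is available, so the estimate equals the vision prediction; as kinematic samples accumulate, their much smaller noise lets them dominate the solution, which mirrors the before/during-interaction behaviour the abstract describes.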