Inferring geometric constraints in human demonstrations
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Published in: Proceedings of The 2nd Conference on Robot Learning, PMLR 87:223-236, 2018.

Abstract: This paper presents an approach for inferring geometric constraints in human demonstrations. In our method, geometric constraint models are built to create representations of kinematic constraints such as fixed point, axial rotation, prismatic motion, planar motion, and others across multiple degrees of freedom. Our method infers geometric constraints using both kinematic and force/torque information. The approach first fits all the constraint models using kinematic information and then evaluates them individually using position, force, and moment criteria. Our approach does not require information about the constraint type or contact geometry; it can determine both simultaneously. We present experimental evaluations using instrumented tongs that show how constraints can be robustly inferred in recordings of human demonstrations.
DOI: 10.48550/arxiv.1810.00140
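
The abstract describes a model-selection procedure: fit several candidate geometric constraint models to the demonstrated motion, score each against residual criteria, and select the best-fitting one. As a rough illustrative sketch only (not the paper's implementation, which additionally uses force and moment criteria), the following Python snippet fits two candidate models, a planar constraint and a fixed-point (spherical) constraint, to a position trajectory and selects the one with the lower RMS position residual. All function names and the synthetic data are assumptions made for illustration.

```python
import numpy as np

def fit_plane(points):
    """Fit a plane to 3-D points by least squares (SVD); return RMS distance to the plane."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of least variance = plane normal
    residuals = (points - centroid) @ normal
    return np.sqrt(np.mean(residuals ** 2))

def fit_fixed_point(points):
    """Fit a sphere (fixed-point constraint) via linear least squares; return RMS radial error."""
    # ||p - c||^2 = r^2  is linear in (c, k) with k = c.c - r^2:  2 p.c - k = p.p
    A = np.hstack([2 * points, -np.ones((len(points), 1))])
    b = np.sum(points ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    radius = np.sqrt(center @ center - k)
    residuals = np.linalg.norm(points - center, axis=1) - radius
    return np.sqrt(np.mean(residuals ** 2))

# Toy usage: a noisy planar trajectory should score best under the planar model.
rng = np.random.default_rng(0)
traj = np.column_stack([rng.uniform(-1, 1, 200),
                        rng.uniform(-1, 1, 200),
                        0.01 * rng.normal(size=200)])
scores = {"planar": fit_plane(traj), "fixed_point": fit_fixed_point(traj)}
print(min(scores, key=scores.get))       # -> "planar"
```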