Learning Articulated Constraints From a One-Shot Demonstration for Robot Manipulation Planning

Bibliographic Details
Published in: IEEE Access, 2019, Vol. 7, pp. 172584-172596
Main Authors: Liu, Yizhou, Zha, Fusheng, Sun, Lining, Li, Jingxuan, Li, Mantian, Wang, Xin
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Robots manipulating in domestic environments generally need to interact with articulated objects, such as doors, drawers, laptops, and swivel chairs. The rigid bodies that make up these objects are connected by a revolute pair or a prismatic pair. Robots are expected to learn and understand the objects' articulated constraints through a simple interaction method; in this way, the autonomy of robot manipulation in environments with unstructured constraints can be greatly improved. In this paper, a method is proposed to obtain an articulated object's constraint model by learning from a one-shot continuous visual demonstration that contains multistep movements, which enables a human teacher to continuously demonstrate several tasks at once without manual segmentation. At the end of this paper, a six-degree-of-freedom robot uses the constraint model obtained by demonstration learning to achieve manipulation planning of various tasks based on the AG-CBiRRT algorithm.
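The abstract describes estimating an articulated object's constraint model (a revolute or prismatic pair) from a demonstrated motion. As a rough illustration of that general idea only, and not the authors' method, the sketch below fits both a prismatic (line) and a revolute (circle) constraint to a sequence of observed 3-D positions of a tracked point (e.g. a drawer or door handle) and keeps the better-fitting model. All function names, the noise level, and the margin heuristic are invented for this example; segmentation of multistep demonstrations and the AG-CBiRRT planner are not covered here.

```python
import numpy as np


def fit_prismatic(points):
    """Fit a translation axis (line) by PCA; return (axis, mean residual)."""
    center = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - center)
    axis = vt[0]                                  # dominant direction = sliding axis
    travel = (points - center) @ axis             # signed displacement along the axis
    perp = (points - center) - np.outer(travel, axis)
    return axis, np.linalg.norm(perp, axis=1).mean()


def fit_revolute(points):
    """Fit a hinge axis and radius: circle in the best-fit plane (Kasa fit)."""
    center = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - center)
    normal = vt[2]                                # plane normal = candidate hinge axis
    out_of_plane = (points - center) @ normal     # deviation from the motion plane
    xy = (points - center) @ vt[:2].T             # 2-D coordinates inside the plane
    # Algebraic circle fit: x^2 + y^2 = 2*a*x + 2*b*y + d, with r^2 = d + a^2 + b^2
    A = np.column_stack([2 * xy, np.ones(len(xy))])
    rhs = (xy ** 2).sum(axis=1)
    (a, b, d), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(d + a ** 2 + b ** 2)
    radial = np.linalg.norm(xy - np.array([a, b]), axis=1) - radius
    return normal, radius, np.hypot(radial, out_of_plane).mean()


def classify_joint(points, margin=0.5):
    """Label a demonstrated trajectory as 'prismatic' or 'revolute'.

    A straight path is also fit well by a circle of very large radius, so the
    revolute model must beat the prismatic one by a clear margin (a crude
    Occam-style bias toward the simpler model) before it is accepted.
    """
    _, e_prism = fit_prismatic(points)
    _, _, e_rev = fit_revolute(points)
    return "revolute" if e_rev < margin * e_prism else "prismatic"


if __name__ == "__main__":
    # Synthetic drawer (straight-line) and door (circular-arc) trajectories.
    t = np.linspace(0.0, 1.0, 50)
    drawer = np.column_stack([0.3 * t, np.zeros_like(t), np.zeros_like(t)])
    door = np.column_stack([0.5 * np.cos(t), 0.5 * np.sin(t), np.zeros_like(t)])
    rng = np.random.default_rng(0)
    print(classify_joint(drawer + 0.002 * rng.standard_normal(drawer.shape)))  # prismatic
    print(classify_joint(door + 0.002 * rng.standard_normal(door.shape)))      # revolute
```

Besides the joint type, the fitted axis and radius are exactly the parameters a constraint-aware planner would need to keep the end effector on the admissible motion manifold during manipulation.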
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2953894