Towards Generalized Manipulation Learning Through Grasp Mechanics-Based Features and Self-Supervision

Bibliographic Details
Published in: IEEE Transactions on Robotics, October 2021, Vol. 37 (5), pp. 1553-1569
Main Authors: Morgan, Andrew S.; Bircher, Walter G.; Dollar, Aaron M.
Format: Article
Language: English
Description
Summary: Learning accurate representations of robot models remains a challenging problem, and is typically approached through large, system-specific feature sets. This method inherently introduces practical shortcomings, as the interpretability and transferability of the learned model typically decrease as more features are introduced into the learning framework in order to handle increasing task complexity. In this article, we examine the problem of developing transferable learned models for dexterous manipulation that are able to accurately predict the behavior of physically distinct systems without retraining. We introduce the notion of learning from visually extracted, grasp mechanics-based features, which are formulated by combining geometrically inspired, analytical representations of the gripper into the feature set to more holistically represent the state of varied systems performing manipulation. We characterize the added utility of such features through simulation and incorporate them into a classifier to predict specific phenomena, or modes of manipulation, that occur during prehensile within-hand movement. Four modes of manipulation, namely normal (rolling contact), drop, stuck, and sliding, are defined, collected physically, and trained via a self-supervised learning approach. The classifier is first trained on a single sensorless underactuated hand variant for all four modes. We then investigate the transferability of the learned classifier on five different planar gripper variants, analyzing the applicability of this approach with both online and offline evaluation.
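
This record does not include the article's implementation details, so the following is only a rough sketch of the kind of pipeline the summary describes: feature vectors that combine visually extracted object-state quantities with analytical gripper representations are fed to a classifier that predicts one of the four manipulation modes. The feature dimensionality, classifier choice (a random forest), and placeholder data below are all assumptions made for illustration, not the authors' method.

    # Hypothetical sketch only: the actual features, labels, and model used in
    # the paper are not specified in this record. This shows the general shape
    # of training a four-mode manipulation classifier on feature vectors.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    MODES = ["normal", "drop", "stuck", "sliding"]  # the four manipulation modes

    # Placeholder data standing in for real samples: each row would combine
    # visually extracted object-pose features with analytical gripper-state
    # features; labels would come from the self-supervised collection procedure.
    rng = np.random.default_rng(seed=0)
    X = rng.normal(size=(1000, 12))              # assumed 12-dimensional features
    y = rng.integers(0, len(MODES), size=1000)   # assumed mode labels (0..3)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))

    # Predict the manipulation mode for a new feature vector.
    print("predicted mode:", MODES[clf.predict(X[:1])[0]])

A transferability test in the spirit of the summary would then evaluate this trained classifier, without retraining, on feature vectors collected from physically different gripper variants.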
ISSN: 1552-3098, 1941-0468
DOI: 10.1109/TRO.2021.3057802