Distributed generalization of learned planning models in robot programming by demonstration
Format: Conference paper
Language: English
Abstract: In Programming by Demonstration (PbD), one of the key problems for autonomous learning is to automatically extract the relevant features of a manipulation task, which has a significant impact on generalization capabilities. In this paper, task features are encoded as constraints of a learned planning model. To extract the relevant constraints, the human teacher demonstrates a set of tests, e.g. a scene with different objects, and the robot tries to execute the planning model on each test using constrained motion planning. Based on statistics about which constraints failed during the planning process, multiple hypotheses about a maximal subset of constraints that allows a solution to be found in all tests are refined in parallel using an evolutionary algorithm. The algorithm was tested in seven experiments on two robot systems.
ISSN: 2153-0858, 2153-0866
DOI: 10.1109/IROS.2011.6094717
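
For intuition only, the sketch below illustrates the kind of search the abstract describes: constraint-subset hypotheses are scored by how many test scenes a planner can still solve under them, and statistics on which constraints failed during planning bias an evolutionary mutation step toward dropping those constraints. The toy planner, scoring scheme, and parameter values are all assumptions made for illustration; the actual system in the paper uses constrained motion planning on robot scenes and refines multiple hypotheses in parallel.

```python
import random

def plan(active, scene_infeasible):
    """Toy stand-in for constrained motion planning on one test scene:
    planning succeeds iff no active constraint is infeasible in that scene."""
    failed = active & scene_infeasible
    return len(failed) == 0, failed

def evaluate(mask, tests):
    """Score a constraint-subset hypothesis (tests solved first, then size)
    and collect statistics on which constraints failed during planning."""
    active = {i for i, keep in enumerate(mask) if keep}
    solved, failures = 0, {}
    for scene in tests:
        ok, failed = plan(active, scene)
        solved += ok
        for c in failed:
            failures[c] = failures.get(c, 0) + 1
    return (solved, len(active)), failures

def refine(num_constraints, tests, population=16, generations=100, p_flip=0.15):
    """Evolutionary refinement of constraint-subset hypotheses; a single
    sequential loop here for brevity."""
    pop = [[random.random() < 0.8 for _ in range(num_constraints)]
           for _ in range(population)]
    for _ in range(generations):
        scored = sorted(((evaluate(m, tests), m) for m in pop),
                        key=lambda s: s[0][0], reverse=True)
        parents = scored[:population // 2]
        children = []
        for (_, failures), m in parents:
            # Bias mutation by the failure statistics: constraints that failed
            # during planning are more likely to be dropped from the child.
            child = list(m)
            for i in range(num_constraints):
                if random.random() < p_flip * (1 + failures.get(i, 0)):
                    child[i] = False if i in failures else not child[i]
            children.append(child)
        pop = [m for _, m in parents] + children
    best = max(pop, key=lambda m: evaluate(m, tests)[0])
    return best, evaluate(best, tests)[0]

if __name__ == "__main__":
    random.seed(0)
    # Six learned constraints; each test scene lists the constraints that no
    # plan can satisfy there (constraint 2 fails in two scenes, 5 in one).
    tests = [{2}, {2, 5}, set()]
    mask, (solved, size) = refine(num_constraints=6, tests=tests)
    kept = [i for i, keep in enumerate(mask) if keep]
    print(f"kept constraints {kept}: solved {solved}/{len(tests)} tests, size {size}")
```

The lexicographic score (tests solved, then subset size) mirrors the stated goal of a maximal subset of constraints that still admits a solution in all tests; how the real system weights feasibility against subset size is not specified in the abstract.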