A model-free method to learn multiple skills in parallel on modular robots
Published in: Nature Communications, 2024-07, Vol. 15 (1), Article 6267 (13 pages)
Format: Article
Language: English
Online access: Full text
Abstract: Legged robots are well-suited for deployment in unstructured environments but require a control scheme specific to their design. As controllers optimised in simulation do not transfer well to the real world (the infamous sim-to-real gap), methods enabling quick learning in the real world, without any assumptions about the specific robot model and its dynamics, are necessary. In this paper, we present a generic method based on Central Pattern Generators that enables the acquisition of basic locomotion skills in parallel, in very few trials. The novelty of our approach, underpinned by a mathematical analysis of the controller model, is to search for good initial states instead of optimising connection weights. Empirical validation on six different robot morphologies demonstrates that our method enables robots to learn primary locomotion skills in less than 15 minutes in the real world. Finally, we showcase the learned skills in a targeted-locomotion experiment.

The authors present a machine learning approach that enables modular robots to learn to walk and follow objects within a short time. The method is based on bioinspired central pattern generators and is validated on six robots with different body shapes.
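To illustrate the core idea described in the abstract (fixed oscillator dynamics and coupling, with the search performed over initial states rather than connection weights), the following is a minimal sketch, not the authors' implementation: a ring of coupled Hopf-style oscillators whose initial states `x0` would be the quantity optimised, while the coupling strength stays fixed. All names, dynamics, and parameter values here are illustrative assumptions.

```python
import numpy as np

def simulate_cpg(x0, steps=200, dt=0.05, a=1.0, coupling=0.2):
    """Integrate a ring of coupled Hopf-style oscillators (a common CPG model).

    x0 : (n, 2) array of initial oscillator states -- in the spirit of the
         paper, this is what a learner would search over; the coupling
         weights below stay fixed. Returns the trajectory of each
         oscillator's first component, usable as joint set-points.
    """
    x = np.array(x0, dtype=float)
    traj = np.empty((steps, x.shape[0]))
    for t in range(steps):
        r2 = (x ** 2).sum(axis=1)  # squared radius of each oscillator
        dx = np.empty_like(x)
        # Hopf oscillator dynamics: each unit converges to a unit-radius limit cycle
        dx[:, 0] = a * (1.0 - r2) * x[:, 0] - x[:, 1]
        dx[:, 1] = a * (1.0 - r2) * x[:, 1] + x[:, 0]
        # fixed diffusive nearest-neighbour coupling on a ring (not optimised)
        dx += coupling * (np.roll(x, 1, axis=0) + np.roll(x, -1, axis=0) - 2 * x)
        x = x + dt * dx  # forward-Euler step
        traj[t] = x[:, 0]
    return traj

# Different initial states produce different phase relationships between
# oscillators, i.e. different candidate gaits for a 6-module robot.
rng = np.random.default_rng(0)
traj = simulate_cpg(rng.normal(size=(6, 2)))
```

A trial-based learner would evaluate each candidate `x0` by running the resulting joint trajectories on the robot and measuring, e.g., displacement, keeping the best initial states.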
ISSN: 2041-1723
DOI: 10.1038/s41467-024-50131-4