Deep Predictive Learning: Motion Learning Concept inspired by Cognitive Robotics


Bibliographic Details
Main authors: Suzuki, Kanata; Ito, Hiroshi; Yamada, Tatsuro; Kase, Kei; Ogata, Tetsuya
Format: Article
Language: English
Online access: Order full text
Description
Abstract: Bridging the gap between motion models and reality using limited data is crucial for deploying robots in the real world. Deep learning is expected to generalize to diverse situations while reducing feature-design costs through end-to-end learning of environmental recognition and motion generation. However, collecting data for model training is costly, and robot trial-and-error with physical contact demands substantial time and human resources. We propose "Deep Predictive Learning," a motion learning concept that predicts the robot's sensorimotor dynamics while assuming imperfections in the prediction model. The concept, inspired by predictive coding theory, addresses the above problems. It rests on the fundamental strategy of predicting the robot's near-future sensorimotor states and minimizing the prediction error between the real world and the model online. Based on the acquired sensor information, the robot adjusts its behavior in real time, thereby tolerating differences between the learning experience and reality. Additionally, the robot is expected to perform a wide range of tasks by combining the motion dynamics embedded in the model. This paper describes the proposed concept, its implementation, and examples of its applications on real robots. The code and documentation are available at: https://ogata-lab.github.io/eipl-docs
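The loop the abstract describes, predict the near-future sensorimotor state, act, observe, and minimize the prediction error online, can be illustrated with a short sketch. The example below is a hypothetical, minimal stand-in: a linear predictor trained by online gradient descent on the squared prediction error. It is not the paper's actual model or the eipl library's API; the names SensorimotorPredictor, control_loop, and env_step are invented for illustration.

```python
import numpy as np


class SensorimotorPredictor:
    """Hypothetical minimal stand-in for a learned prediction model:
    a linear map from the current sensorimotor state s_t to a
    prediction of the next state s_{t+1}."""

    def __init__(self, dim: int, lr: float = 0.05):
        self.W = np.zeros((dim, dim))  # model parameters
        self.lr = lr                   # online learning rate

    def predict(self, s: np.ndarray) -> np.ndarray:
        return self.W @ s

    def update(self, s: np.ndarray, s_next: np.ndarray) -> float:
        """One online gradient step on the squared prediction error
        ||W s - s_next||^2; returns the error norm."""
        err = self.predict(s) - s_next
        self.W -= self.lr * np.outer(err, s)
        return float(np.linalg.norm(err))


def control_loop(env_step, s, model, steps=200):
    """Each cycle: predict the near-future sensorimotor state, act on
    the prediction, observe the real outcome, and reduce the prediction
    error online. env_step is a hypothetical robot/environment interface."""
    for _ in range(steps):
        s_pred = model.predict(s)     # predicted next sensorimotor state
        s_next = env_step(s, s_pred)  # execute; observe the real next state
        model.update(s, s_next)       # online error minimization
        s = s_next


if __name__ == "__main__":
    # Toy stand-in environment: unknown linear dynamics plus sensor noise.
    rng = np.random.default_rng(0)
    A = rng.normal(scale=0.3, size=(4, 4))
    env_step = lambda s, s_pred: A @ s + rng.normal(scale=0.01, size=4)
    control_loop(env_step, rng.normal(size=4), SensorimotorPredictor(dim=4))
```

In the paper's actual implementations a learned neural network would play the role of this linear predictor, but the predict-act-observe-correct structure of the loop is what the concept prescribes: tolerating model imperfection by correcting against reality at every step rather than demanding a perfect model up front.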
DOI: 10.48550/arXiv.2306.14714