Deep Predictive Learning: Motion Learning Concept inspired by Cognitive Robotics

Bridging the gap between motion models and reality with limited data is crucial for deploying robots in the real world. Deep learning is expected to generalize to diverse situations while reducing feature-design costs through end-to-end learning of environmental recognition and motion generation.

Detailed Description

Bibliographic Details
Published in: arXiv.org 2024-03
Main authors: Suzuki, Kanata; Ito, Hiroshi; Yamada, Tatsuro; Kase, Kei; Ogata, Tetsuya
Format: Article
Language: eng
Subjects:
Online Access: Full text
container_title arXiv.org
creator Suzuki, Kanata
Ito, Hiroshi
Yamada, Tatsuro
Kase, Kei
Ogata, Tetsuya
description Bridging the gap between motion models and reality with limited data is crucial for deploying robots in the real world. Deep learning is expected to generalize to diverse situations while reducing feature-design costs through end-to-end learning of environmental recognition and motion generation. However, collecting data for model training is costly, and robot trial-and-error with physical contact demands considerable time and human resources. We propose "Deep Predictive Learning," a motion learning concept that predicts the robot's sensorimotor dynamics while assuming that the prediction model is imperfect. The concept is inspired by predictive coding theory and addresses the problems above. Its fundamental strategy is to predict the robot's near-future sensorimotor states and to minimize, online, the prediction error between the real world and the model. Based on the acquired sensor information, the robot adjusts its behavior in real time and thereby tolerates differences between its learning experience and reality. Additionally, the robot is expected to perform a wide range of tasks by combining the motion dynamics embedded in the model. This paper describes the proposed concept, its implementation, and examples of its application to real robots. The code and documents are available at: https://ogata-lab.github.io/eipl-docs
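To illustrate the strategy described above, the following is a minimal sketch of the online prediction-error-minimization loop: a recurrent model predicts the robot's near-future sensorimotor state, the prediction drives the motion, and the error against the actually sensed state is reduced online. All names here (PredictiveModel, DummyRobot, the state dimension) are illustrative assumptions, not the EIPL library's actual API; see https://ogata-lab.github.io/eipl-docs for the real implementation.

```python
# Minimal sketch (an assumption-laden illustration, not the EIPL implementation)
# of the deep predictive learning loop: predict the near-future sensorimotor
# state, act on the prediction, then minimize the prediction error against
# what the sensors actually report.
import torch
import torch.nn as nn


class PredictiveModel(nn.Module):
    """Predicts the next sensorimotor state from the current one."""

    def __init__(self, state_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.rnn = nn.LSTMCell(state_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, state_dim)

    def forward(self, state, hidden=None):
        h, c = self.rnn(state, hidden)
        return self.head(h), (h, c)


class DummyRobot:
    """Stand-in for a real robot: the world never matches the model exactly."""

    def step(self, command: torch.Tensor) -> torch.Tensor:
        # Execute the command and return a noisy observation of the outcome.
        return command.detach() + 0.05 * torch.randn_like(command)


def online_loop(model, robot, state, steps: int = 50, lr: float = 1e-2):
    """Generate motion while minimizing prediction error online."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    hidden = None
    for _ in range(steps):
        predicted, hidden = model(state, hidden)  # near-future prediction
        observed = robot.step(predicted)          # act, then sense reality
        error = ((predicted - observed) ** 2).mean()
        opt.zero_grad()
        error.backward()                          # reduce the model/reality gap
        opt.step()
        state = observed                          # ground the next step in sensing
        hidden = (hidden[0].detach(), hidden[1].detach())


if __name__ == "__main__":
    dim = 8  # illustrative sensorimotor dimension
    online_loop(PredictiveModel(dim), DummyRobot(), torch.zeros(1, dim))
```

Whether the prediction error is reduced by online parameter updates or only by feeding the observed state back into the recurrent model is an implementation detail not specified in this record; the sketch does both purely for illustration.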
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2024-03
issn 2331-8422
language eng
recordid cdi_proquest_journals_2829954608
source Freely Accessible Journals
subjects Deep learning
Errors
Free energy
Prediction models
Predictions
Robotics
Robots
Self-supervised learning
title Deep Predictive Learning: Motion Learning Concept inspired by Cognitive Robotics
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-17T01%3A57%3A24IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Deep%20Predictive%20Learning:%20Motion%20Learning%20Concept%20inspired%20by%20Cognitive%20Robotics&rft.jtitle=arXiv.org&rft.au=Suzuki,%20Kanata&rft.date=2024-03-14&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2829954608%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2829954608&rft_id=info:pmid/&rfr_iscdi=true