Self-Supervised Visual Planning with Temporal Skip Connections
In order to autonomously learn wide repertoires of complex skills, robots must be able to learn from their own autonomously collected data, without human supervision. One learning signal that is always available for autonomously collected data is prediction: if a robot can learn to predict the future, it can use this predictive model to take actions to produce desired outcomes, such as moving an object to a particular location. However, in complex open-world scenarios, designing a representation for prediction is difficult. In this work, we instead aim to enable self-supervised robotic learning through direct video prediction: instead of attempting to design a good representation, we directly predict what the robot will see next, and then use this model to achieve desired goals. A key challenge in video prediction for robotic manipulation is handling complex spatial arrangements such as occlusions. To that end, we introduce a video prediction model that can keep track of objects through occlusion by incorporating temporal skip-connections. Together with a novel planning criterion and action space formulation, we demonstrate that this model substantially outperforms prior work on video prediction-based control. Our results show manipulation of objects not seen during training, handling multiple objects, and pushing objects around obstructions. These results represent a significant advance in the range and complexity of skills that can be performed entirely with self-supervised robotic learning.
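The temporal skip-connection idea from the abstract can be illustrated with a small sketch. The PyTorch snippet below is a minimal, simplified illustration and not the authors' architecture: it omits action conditioning and recurrence, and the layer sizes and names (`SkipConnectionPredictor`, `rollout`) are invented for the example. The point it shows is that features from the first context frame are fed into the decoder at every predicted step, so content hidden behind an occlusion can still be recovered from before the occlusion began. A second sketch of the sampling-based planning loop follows the record fields below.

```python
# Minimal sketch (assumption: PyTorch; layer sizes and names are illustrative,
# not the model from the paper) of a video predictor with a temporal skip
# connection: features of the first context frame are concatenated into the
# decoder at every future step, so pixels hidden by an occlusion can still be
# recovered from before the occlusion began.
import torch
import torch.nn as nn


class SkipConnectionPredictor(nn.Module):
    def __init__(self, channels=3, hidden=32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        # The decoder sees the current features plus skipped features from frame 0.
        self.decode = nn.Sequential(
            nn.Conv2d(2 * hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, first_frame, current_frame):
        skip = self.encode(first_frame)      # temporal skip from the start of the clip
        feat = self.encode(current_frame)    # features of the most recent frame
        return self.decode(torch.cat([feat, skip], dim=1))


def rollout(model, context_frame, horizon):
    """Autoregressively predict `horizon` future frames from one context frame."""
    frames, frame = [], context_frame
    for _ in range(horizon):
        frame = model(context_frame, frame)
        frames.append(frame)
    return torch.stack(frames, dim=1)        # (batch, horizon, C, H, W)


if __name__ == "__main__":
    model = SkipConnectionPredictor()
    video = rollout(model, torch.rand(1, 3, 64, 64), horizon=5)
    print(video.shape)  # torch.Size([1, 5, 3, 64, 64])
```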
Saved in:
Published in: | arXiv.org 2017-10 |
---|---|
Main authors: | Ebert, Frederik; Finn, Chelsea; Lee, Alex X; Levine, Sergey |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Ebert, Frederik; Finn, Chelsea; Lee, Alex X; Levine, Sergey |
description | In order to autonomously learn wide repertoires of complex skills, robots must be able to learn from their own autonomously collected data, without human supervision. One learning signal that is always available for autonomously collected data is prediction: if a robot can learn to predict the future, it can use this predictive model to take actions to produce desired outcomes, such as moving an object to a particular location. However, in complex open-world scenarios, designing a representation for prediction is difficult. In this work, we instead aim to enable self-supervised robotic learning through direct video prediction: instead of attempting to design a good representation, we directly predict what the robot will see next, and then use this model to achieve desired goals. A key challenge in video prediction for robotic manipulation is handling complex spatial arrangements such as occlusions. To that end, we introduce a video prediction model that can keep track of objects through occlusion by incorporating temporal skip-connections. Together with a novel planning criterion and action space formulation, we demonstrate that this model substantially outperforms prior work on video prediction-based control. Our results show manipulation of objects not seen during training, handling multiple objects, and pushing objects around obstructions. These results represent a significant advance in the range and complexity of skills that can be performed entirely with self-supervised robotic learning. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2017-10 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2076604742 |
source | Free E-Journals |
subjects | Complexity; Mathematical models; Obstructions; Occlusion; Prediction models; Representations; Robot learning; Robotics; Robots; Skills |
title | Self-Supervised Visual Planning with Temporal Skip Connections |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-23T15%3A16%3A34IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Self-Supervised%20Visual%20Planning%20with%20Temporal%20Skip%20Connections&rft.jtitle=arXiv.org&rft.au=Ebert,%20Frederik&rft.date=2017-10-15&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2076604742%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2076604742&rft_id=info:pmid/&rfr_iscdi=true |
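The abstract's "novel planning criterion and action space formulation" is used inside a sampling-based model-predictive control loop. The sketch below is a hedged illustration of that kind of loop using the cross-entropy method: `predict_pixel_position` is a hypothetical stand-in for the learned video-prediction model, and the cost here (distance of a designated pixel to a goal pixel) is a simplification of the paper's actual criterion.

```python
# Hedged sketch of a cross-entropy-method (CEM) planner that scores candidate
# action sequences by how close the predicted position of a designated pixel
# ends up to a goal pixel. `predict_pixel_position` is a hypothetical stand-in
# for the learned video-prediction model.
import numpy as np


def predict_pixel_position(start_pixel, actions):
    # Placeholder dynamics: pretend each planar pushing action moves the
    # designated pixel by the commanded displacement. A learned model would
    # predict this from camera images instead.
    return start_pixel + actions.sum(axis=0)


def cem_plan(start_pixel, goal_pixel, horizon=5, action_dim=2,
             samples=200, elites=20, iterations=3, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    mean = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))
    for _ in range(iterations):
        candidates = rng.normal(mean, std, size=(samples, horizon, action_dim))
        # Planning criterion: distance of the predicted designated pixel to the goal.
        costs = np.array([
            np.linalg.norm(predict_pixel_position(start_pixel, a) - goal_pixel)
            for a in candidates
        ])
        elite = candidates[np.argsort(costs)[:elites]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean[0]  # execute only the first action, then replan (MPC)


if __name__ == "__main__":
    action = cem_plan(start_pixel=np.array([12.0, 40.0]),
                      goal_pixel=np.array([30.0, 25.0]))
    print("first action:", action)
```

In a real control loop the robot would execute that first action, capture a new image, and replan from the updated designated-pixel position at every step.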