Motion Reasoning for Goal-Based Imitation Learning

We address goal-based imitation learning, where the aim is to output the symbolic goal from a third-person video demonstration. This enables the robot to plan for execution and reproduce the same goal in a completely different environment. The key challenge is that the goal of a video demonstration is often ambiguous at the level of semantic actions. The human demonstrators might unintentionally achieve certain subgoals in the demonstrations with their actions. Our main contribution is to propose a motion reasoning framework that combines task and motion planning to disambiguate the true intention of the demonstrator in the video demonstration. This allows us to robustly recognize the goals that cannot be disambiguated by previous action-based approaches. We evaluate our approach by collecting a dataset of 96 video demonstrations in a mockup kitchen environment. We show that our motion reasoning plays an important role in recognizing the actual goal of the demonstrator and improves the success rate by over 20%. We further show that by using the automatically inferred goal from the video demonstration, our robot is able to reproduce the same task in a real kitchen environment.
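The core idea in the abstract — using motion-level evidence to choose between symbolic goals that look identical at the action level — can be illustrated with a toy sketch. Everything below (the kitchen layout, the candidate goals, the Manhattan-distance cost, the scoring rule) is a hypothetical illustration, not the paper's actual formulation: a candidate goal is preferred when its cheapest plan explains the observed demonstration trajectory with the least extra motion.

```python
# Toy illustration (hypothetical, not the paper's implementation):
# disambiguate between candidate symbolic goals by asking which goal's
# cheapest plan best "explains" the observed end-effector trajectory.

from itertools import permutations

# Hypothetical mockup-kitchen layout: object name -> (x, y) location.
OBJECTS = {"cup": (1, 0), "plate": (4, 0), "bowl": (4, 3)}

def plan_cost(goal_objects, trajectory):
    """Mismatch between the observed trajectory length and the cheapest
    object-visit order that would achieve `goal_objects` (Manhattan metric)."""
    def path_len(points):
        return sum(abs(a[0] - b[0]) + abs(a[1] - b[1])
                   for a, b in zip(points, points[1:]))
    observed = path_len(trajectory)
    best = float("inf")
    for order in permutations(goal_objects):
        # Candidate plan: start where the demonstration starts,
        # then visit each object the goal requires.
        plan_points = [trajectory[0]] + [OBJECTS[o] for o in order]
        best = min(best, path_len(plan_points))
    return abs(observed - best)

# Demonstration passes the cup on the way to the bowl, so an action-level
# reading cannot tell "cup was a subgoal" from "incidental contact".
demo = [(0, 0), (1, 0), (4, 3)]

candidates = {"goal: cup+bowl": ["cup", "bowl"], "goal: plate": ["plate"]}
inferred = min(candidates, key=lambda g: plan_cost(candidates[g], demo))
```

Here the motion cost breaks the tie: the observed path is exactly what the cheapest cup-then-bowl plan would produce, so that goal explains the demonstration better than the alternative.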


Bibliographic Details
Published in: arXiv.org, 2019-11
Main authors: Huang, De-An; Chao, Yu-Wei; Paxton, Chris; Deng, Xinke; Li, Fei-Fei; Niebles, Juan Carlos; Garg, Animesh; Fox, Dieter
Format: Article
Language: English
Online access: Full text
EISSN: 2331-8422
Source: Free E-Journals
Subjects: Automation; Learning; Motion planning; Reasoning; Robots