Towards Hierarchical Task Decomposition using Deep Reinforcement Learning for Pick and Place Subtasks
Deep Reinforcement Learning (DRL) is emerging as a promising approach to generate adaptive behaviors for robotic platforms. However, a major drawback of using DRL is the data-hungry training regime that requires millions of trial and error attempts, which is impractical when running experiments on robotic systems.
Saved in:
Published in: | arXiv.org 2021-10 |
---|---|
Main authors: | Marzari, Luca; Pore, Ameya; Dall'Alba, Diego; Aragon-Camarasa, Gerardo; Farinelli, Alessandro; Fiorini, Paolo |
Format: | Article |
Language: | eng |
Subjects: | Automation; Decomposition; Deep learning; Locomotion; Pick and place tasks; Robotics; Task complexity |
Online Access: | Full text |
creator | Marzari, Luca; Pore, Ameya; Dall'Alba, Diego; Aragon-Camarasa, Gerardo; Farinelli, Alessandro; Fiorini, Paolo |
description | Deep Reinforcement Learning (DRL) is emerging as a promising approach to generate adaptive behaviors for robotic platforms. However, a major drawback of using DRL is the data-hungry training regime that requires millions of trial and error attempts, which is impractical when running experiments on robotic systems. Learning from Demonstrations (LfD) has been introduced to solve this issue by cloning the behavior of expert demonstrations. However, LfD requires a large number of demonstrations that are difficult to acquire since dedicated complex setups are required. To overcome these limitations, we propose a multi-subtask reinforcement learning methodology where complex pick and place tasks can be decomposed into low-level subtasks. These subtasks are parametrized as expert networks and learned via DRL methods. Trained subtasks are then combined by a high-level choreographer to accomplish the intended pick and place task considering different initial configurations. As a testbed, we use a pick and place robotic simulator to demonstrate our methodology and show that our method outperforms a benchmark methodology based on LfD in terms of sample efficiency. We transfer the learned policy to the real robotic system and demonstrate robust grasping using various geometric-shaped objects. |
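The hierarchy the abstract describes — low-level subtask policies ("expert networks") trained separately and a high-level choreographer that selects which expert to run — can be illustrated with a minimal sketch. This is not the authors' code: the expert policies, the stage detector, and all names (`Choreographer`, `approach_policy`, etc.) are illustrative assumptions; in the paper both the experts and the choreographer are learned via DRL rather than hand-coded.

```python
# Hedged sketch of hierarchical task decomposition for pick and place:
# trained subtask experts are composed by a high-level choreographer.
# All names and the rule-based stage detector are illustrative only.

def approach_policy(state):
    # Hypothetical expert: drive the gripper toward the object.
    return "move_to_object"

def grasp_policy(state):
    # Hypothetical expert: close the gripper on the object.
    return "close_gripper"

def place_policy(state):
    # Hypothetical expert: carry the object to the goal and release.
    return "move_to_goal_and_release"

class Choreographer:
    """Picks a trained subtask expert based on the current task stage."""

    def __init__(self):
        self.experts = {
            "approach": approach_policy,
            "grasp": grasp_policy,
            "place": place_policy,
        }

    def stage(self, state):
        # Toy stage detector; in the paper this selection is itself
        # learned so the system handles different initial configurations.
        if not state["near_object"]:
            return "approach"
        if not state["holding_object"]:
            return "grasp"
        return "place"

    def act(self, state):
        return self.experts[self.stage(state)](state)

choreo = Choreographer()
action = choreo.act({"near_object": False, "holding_object": False})
```

Each expert only needs to master a short-horizon subtask, which is what makes the scheme more sample-efficient than learning the full pick-and-place sequence monolithically or cloning it from demonstrations.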
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2021-10 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2487645054 |
source | Free E-Journals |
subjects | Automation; Decomposition; Deep learning; Locomotion; Pick and place tasks; Robotics; Task complexity |
title | Towards Hierarchical Task Decomposition using Deep Reinforcement Learning for Pick and Place Subtasks |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-29T17%3A19%3A50IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Towards%20Hierarchical%20Task%20Decomposition%20using%20Deep%20Reinforcement%20Learning%20for%20Pick%20and%20Place%20Subtasks&rft.jtitle=arXiv.org&rft.au=Marzari,%20Luca&rft.date=2021-10-19&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2487645054%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2487645054&rft_id=info:pmid/&rfr_iscdi=true |