Reinforcement Learning for Vision-based Object Manipulation with Non-parametric Policy and Action Primitives

Object manipulation is a crucial ability for a service robot, but it is hard to solve with reinforcement learning for reasons such as poor sample efficiency. In this paper, to tackle object manipulation, we propose a novel framework, AP-NPQL (Non-Parametric Q-Learning with Action Primitives), which can efficiently solve object manipulation with visual input and sparse rewards by utilizing a non-parametric policy for reinforcement learning and an appropriate behavior prior for object manipulation. We evaluate the efficiency and performance of the proposed AP-NPQL on four simulated object manipulation tasks (pushing a plate, stacking a box, flipping a cup, and picking and placing a plate), and find that AP-NPQL outperforms state-of-the-art algorithms based on parametric policies and behavior priors in terms of learning time and task success rate. We also successfully transfer and validate the learned policy for the plate pick-and-place task on a real robot in a sim-to-real manner.
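The paper's AP-NPQL is non-parametric and vision-based; as a loose illustration of the "action primitives" idea the abstract mentions, the sketch below runs plain tabular Q-learning over two hand-coded primitives in a hypothetical 1-D pushing task with a sparse reward. The task, names, and hyperparameters are all assumptions for illustration and are not taken from the paper.

```python
import random
from collections import defaultdict

# Hypothetical sketch (NOT the paper's AP-NPQL): tabular Q-learning over a
# small set of "action primitives" in a toy 1-D pushing task with a sparse
# reward, showing how primitives shrink the exploration problem relative to
# raw motor commands.

PRIMITIVES = ["push_left", "push_right"]   # high-level actions, not torques
START, GOAL, MAX_STEPS = 0, 5, 30

def step(pos, action):
    """Deterministic toy dynamics; reward is sparse (only on reaching the goal)."""
    pos += 1 if action == "push_right" else -1
    return pos, (1.0 if pos == GOAL else 0.0), pos == GOAL

def greedy(q, pos, rng):
    """Greedy primitive with random tie-breaking (ties are common early on)."""
    best = max(q[(pos, p)] for p in PRIMITIVES)
    return rng.choice([p for p in PRIMITIVES if q[(pos, p)] == best])

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = defaultdict(float)                 # Q[(state, primitive)]
    for _ in range(episodes):
        pos = START
        for _ in range(MAX_STEPS):
            a = rng.choice(PRIMITIVES) if rng.random() < eps else greedy(q, pos, rng)
            nxt, r, done = step(pos, a)
            target = r + (0.0 if done else gamma * max(q[(nxt, p)] for p in PRIMITIVES))
            q[(pos, a)] += alpha * (target - q[(pos, a)])
            pos = nxt
            if done:
                break
    return q

q = train()
print(greedy(q, START, random.Random(1)))  # expected to settle on "push_right"
```

Because each primitive spans many low-level commands, the agent only has to rank a handful of discrete choices per state, which is what makes sparse-reward exploration tractable in this toy setting.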

Bibliographic details
Published in: arXiv.org, 2022-06
Main authors: Son, Dongwon; Kim, Myungsin; Sim, Jaecheol; Shin, Wonsik
Format: Article
Language: English
Online access: Full text
DOI: 10.48550/arXiv.2206.05671
EISSN: 2331-8422
Subjects: Algorithms; Computer Science - Robotics; Machine learning; Pick and place tasks; Service robots