Grasping in the Wild: Learning 6DoF Closed-Loop Grasping From Low-Cost Demonstrations
Intelligent manipulation benefits from the capacity to flexibly control an end-effector with high degrees of freedom (DoF) and dynamically react to the environment. However, due to the challenges of collecting effective training data and learning efficiently, most grasping algorithms today are limited to top-down movements and open-loop execution. In this work, we propose a new low-cost hardware interface for collecting grasping demonstrations by people in diverse environments. This data makes it possible to train a robust end-to-end 6DoF closed-loop grasping model with reinforcement learning that transfers to real robots. A key aspect of our grasping model is that it uses "action-view" based rendering to simulate future states with respect to different possible actions. By evaluating these states using a learned value function (e.g., Q-function), our method is able to better select corresponding actions that maximize total rewards (i.e., grasping success). Our final grasping system is able to achieve reliable 6DoF closed-loop grasping of novel objects across various scene configurations, as well as in dynamic scenes with moving objects.
Saved in:
Published in: | IEEE robotics and automation letters 2020-07, Vol.5 (3), p.4978-4985 |
---|---|
Main authors: | Song, Shuran; Zeng, Andy; Lee, Johnny; Funkhouser, Thomas |
Format: | Article |
Language: | eng |
Subjects: | Algorithms; Cameras; Computer simulation; deep learning for visual perception; Deep learning in grasping and manipulation; Degrees of freedom; Grasping; Grasping (robotics); Grippers; Low cost; Machine learning; Robots; Task analysis; Visualization |
Online access: | Order full text |
container_end_page | 4985 |
---|---|
container_issue | 3 |
container_start_page | 4978 |
container_title | IEEE robotics and automation letters |
container_volume | 5 |
creator | Song, Shuran; Zeng, Andy; Lee, Johnny; Funkhouser, Thomas |
description | Intelligent manipulation benefits from the capacity to flexibly control an end-effector with high degrees of freedom (DoF) and dynamically react to the environment. However, due to the challenges of collecting effective training data and learning efficiently, most grasping algorithms today are limited to top-down movements and open-loop execution. In this work, we propose a new low-cost hardware interface for collecting grasping demonstrations by people in diverse environments. This data makes it possible to train a robust end-to-end 6DoF closed-loop grasping model with reinforcement learning that transfers to real robots. A key aspect of our grasping model is that it uses "action-view" based rendering to simulate future states with respect to different possible actions. By evaluating these states using a learned value function (e.g., Q-function), our method is able to better select corresponding actions that maximize total rewards (i.e., grasping success). Our final grasping system is able to achieve reliable 6DoF closed-loop grasping of novel objects across various scene configurations, as well as in dynamic scenes with moving objects. |
doi_str_mv | 10.1109/LRA.2020.3004787 |
format | Article |
publisher | Piscataway: IEEE |
rights | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020 |
eissn | 2377-3766 |
coden | IRALC6 |
ieee_id | 9126187 |
orcidid | 0000-0002-4319-2159; 0000-0002-8768-7356 |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2377-3766 |
ispartof | IEEE robotics and automation letters, 2020-07, Vol.5 (3), p.4978-4985 |
issn | 2377-3766 |
language | eng |
recordid | cdi_proquest_journals_2422024651 |
source | IEEE Electronic Library (IEL) |
subjects | Algorithms; Cameras; Computer simulation; deep learning for visual perception; Deep learning in grasping and manipulation; Degrees of freedom; Grasping; Grasping (robotics); Grippers; Low cost; Machine learning; Robots; Task analysis; Visualization |
title | Grasping in the Wild: Learning 6DoF Closed-Loop Grasping From Low-Cost Demonstrations |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-22T23%3A01%3A07IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Grasping%20in%20the%20Wild:%20Learning%206DoF%20Closed-Loop%20Grasping%20From%20Low-Cost%20Demonstrations&rft.jtitle=IEEE%20robotics%20and%20automation%20letters&rft.au=Song,%20Shuran&rft.date=2020-07-01&rft.volume=5&rft.issue=3&rft.spage=4978&rft.epage=4985&rft.pages=4978-4985&rft.issn=2377-3766&rft.eissn=2377-3766&rft.coden=IRALC6&rft_id=info:doi/10.1109/LRA.2020.3004787&rft_dat=%3Cproquest_RIE%3E2422024651%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2422024651&rft_id=info:pmid/&rft_ieee_id=9126187&rfr_iscdi=true |