Toward Sim-to-Real Directional Semantic Grasping

We address the problem of directional semantic grasping, that is, grasping a specific object from a specific direction. We approach the problem using deep reinforcement learning via a double deep Q-network (DDQN) that learns to map downsampled RGB input images from a wrist-mounted camera to Q-values, which are then translated into Cartesian robot control commands via the cross-entropy method (CEM). The network is learned entirely on simulated data generated by a custom robot simulator that models both physical reality (contacts) and perceptual quality (high-quality rendering). The reality gap is bridged using domain randomization. The system is an example of end-to-end (mapping input monocular RGB images to output Cartesian motor commands) grasping of objects from multiple pre-defined object-centric orientations, such as from the side or top. We show promising results in both simulation and the real world, along with some challenges faced and the need for future research in this area.
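The abstract describes selecting Cartesian commands by maximizing learned Q-values with the cross-entropy method. The following is a minimal sketch of that CEM selection step only, with a toy stand-in Q-function; the function name, hyperparameters, and action dimensionality are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cem_select_action(q_fn, action_dim, n_iters=5, pop_size=64, elite_frac=0.1, seed=0):
    """Cross-entropy method: iteratively refit a Gaussian over candidate
    Cartesian commands toward those scoring highest under q_fn.

    q_fn maps a (pop_size, action_dim) batch of candidate actions to a
    vector of scalar Q-values; in the paper this role is played by the
    trained DDQN, here by a toy surrogate.
    """
    rng = np.random.default_rng(seed)
    mean = np.zeros(action_dim)            # start from a broad Gaussian
    std = np.ones(action_dim)
    n_elite = max(1, int(pop_size * elite_frac))
    for _ in range(n_iters):
        candidates = rng.normal(mean, std, size=(pop_size, action_dim))
        scores = q_fn(candidates)          # Q-value for each candidate
        elite = candidates[np.argsort(scores)[-n_elite:]]  # keep the best
        mean = elite.mean(axis=0)          # refit the sampling Gaussian
        std = elite.std(axis=0) + 1e-6
    return mean                            # refined Cartesian command

# Toy Q-function whose optimum is a known 3-D command:
target = np.array([0.3, -0.1, 0.5])
q = lambda a: -np.sum((a - target) ** 2, axis=1)
action = cem_select_action(q, action_dim=3)
```

With this quadratic surrogate, the returned `action` converges toward `target` within a few CEM iterations; in the actual system the network's Q-values would replace the surrogate.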

Bibliographic details

Main authors: Iqbal, Shariq; Tremblay, Jonathan; To, Thang; Cheng, Jia; Leitch, Erik; Campbell, Andy; Leung, Kirby; McKay, Duncan; Birchfield, Stan
Format: Article
Language: English
DOI: 10.48550/arxiv.1909.02075
Date: 2019-09-04
Source: arXiv.org
Subjects: Computer Science - Robotics