Object-Independent Human-to-Robot Handovers Using Real Time Robotic Vision
We present an approach for safe and object-independent human-to-robot handovers using real-time robotic vision and manipulation. We aim for general applicability with a generic object detector, a fast grasp selection algorithm, and a single gripper-mounted RGB-D camera, hence not relying on external sensors. The robot is controlled via visual servoing towards the object of interest. Putting a high emphasis on safety, we use two perception modules: human body-part segmentation and hand/finger segmentation. Pixels deemed to belong to the human are filtered out from the candidate grasp poses, ensuring that the robot picks the object without colliding with its human partner. The grasp selection and perception modules run concurrently in real time, which allows monitoring of the handover's progress. In experiments with 13 objects, the robot successfully took the object from the human in 81.9% of the trials.
Saved in:
Published in: | IEEE robotics and automation letters 2021-01, Vol.6 (1), p.17-23 |
---|---|
Main Authors: | Rosenberger, Patrick; Cosgun, Akansel; Newbury, Rhys; Kwan, Jun; Ortenzi, Valerio; Corke, Peter; Grafinger, Manfred |
Format: | Article |
Language: | eng |
Subjects: | Grasping; Handover; Machine vision; Perception; Physical human-robot interaction; Real-time systems; Robots; Safety; Segmentation |
Online Access: | Order full text |
container_end_page | 23 |
---|---|
container_issue | 1 |
container_start_page | 17 |
container_title | IEEE robotics and automation letters |
container_volume | 6 |
creator | Rosenberger, Patrick Cosgun, Akansel Newbury, Rhys Kwan, Jun Ortenzi, Valerio Corke, Peter Grafinger, Manfred |
description | We present an approach for safe and object-independent human-to-robot handovers using real-time robotic vision and manipulation. We aim for general applicability with a generic object detector, a fast grasp selection algorithm, and a single gripper-mounted RGB-D camera, hence not relying on external sensors. The robot is controlled via visual servoing towards the object of interest. Putting a high emphasis on safety, we use two perception modules: human body-part segmentation and hand/finger segmentation. Pixels deemed to belong to the human are filtered out from the candidate grasp poses, ensuring that the robot picks the object without colliding with its human partner. The grasp selection and perception modules run concurrently in real time, which allows monitoring of the handover's progress. In experiments with 13 objects, the robot successfully took the object from the human in 81.9% of the trials. |
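The safety step the abstract describes — discarding candidate grasp poses that fall on pixels segmented as human — can be sketched as a simple mask lookup. This is a minimal illustration, not the paper's actual implementation; the function name, array shapes, and toy mask below are assumptions.

```python
import numpy as np

def filter_grasps_by_human_mask(grasp_pixels, human_mask):
    """Discard grasp candidates whose image location falls on a human pixel.

    grasp_pixels : (N, 2) integer array of (row, col) grasp centres.
    human_mask   : (H, W) boolean array, True where segmentation says "human".
    Returns the subset of grasp_pixels landing on non-human pixels.
    """
    rows, cols = grasp_pixels[:, 0], grasp_pixels[:, 1]
    on_human = human_mask[rows, cols]   # True for unsafe candidates
    return grasp_pixels[~on_human]      # keep only safe candidates

# Toy example: 4x4 image whose right half is labelled human.
mask = np.zeros((4, 4), dtype=bool)
mask[:, 2:] = True
candidates = np.array([[0, 0], [1, 3], [3, 1], [2, 2]])
safe = filter_grasps_by_human_mask(candidates, mask)
print(safe)  # [[0 0] [3 1]] — candidates on human pixels are dropped
```

In the paper's system this filtering runs concurrently with grasp selection on live RGB-D frames; a practical version would also dilate the mask by a safety margin so grasps merely near the hand are rejected.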
doi_str_mv | 10.1109/LRA.2020.3026970 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2377-3766 |
ispartof | IEEE robotics and automation letters, 2021-01, Vol.6 (1), p.17-23 |
issn | 2377-3766 (EISSN: 2377-3766) |
language | eng |
recordid | cdi_proquest_journals_2451189948 |
source | IEEE Electronic Library (IEL) |
subjects | Algorithms; Body parts; Deep learning in grasping and manipulation; Grasping; Handover; Machine vision; Modules; Perception; Perception for grasping and manipulation; Physical human-robot interaction; Real time; Real-time systems; Robot kinematics; Robots; Safety; Segmentation; Servocontrol; Visual control |
title | Object-Independent Human-to-Robot Handovers Using Real Time Robotic Vision |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-07T08%3A37%3A05IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Object-Independent%20Human-to-Robot%20Handovers%20Using%20Real%20Time%20Robotic%20Vision&rft.jtitle=IEEE%20robotics%20and%20automation%20letters&rft.au=Rosenberger,%20Patrick&rft.date=2021-01&rft.volume=6&rft.issue=1&rft.spage=17&rft.epage=23&rft.pages=17-23&rft.issn=2377-3766&rft.eissn=2377-3766&rft.coden=IRALC6&rft_id=info:doi/10.1109/LRA.2020.3026970&rft_dat=%3Cproquest_RIE%3E2451189948%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2451189948&rft_id=info:pmid/&rft_ieee_id=9206048&rfr_iscdi=true |