Six-Dimensional Target Pose Estimation for Robot Autonomous Manipulation: Methodology and Verification
The autonomous and precise grasping operation of robots is considered challenging in situations where there are different objects with different shapes and postures. In this study, we proposed a method of 6-D target pose estimation for robot autonomous manipulation. The proposed method is based on: 1) a fully convolutional neural network for scene semantic segmentation and 2) fast global registration to achieve target pose estimation.
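The grasp-generation step described in the abstract chains the estimated 6-D object pose with a grasp pose defined on the object's point cloud model. A minimal NumPy sketch of that composition (not the authors' code; the frame names and the model-frame grasp pose are hypothetical illustrations):

```python
import numpy as np

def pose_to_matrix(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def grasp_in_camera_frame(T_cam_obj, T_obj_grasp):
    """Chain the estimated object pose with a grasp pose stored in the object's model frame."""
    return T_cam_obj @ T_obj_grasp

# Example: estimated pose = object rotated 90 degrees about z and shifted 0.5 m along x.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
T_cam_obj = pose_to_matrix(Rz, np.array([0.5, 0.0, 0.0]))

# Hypothetical grasp pose attached to the object model: 10 cm above the model origin.
T_obj_grasp = pose_to_matrix(np.eye(3), np.array([0.0, 0.0, 0.1]))

# Grasp pose expressed in the camera/robot frame, ready for the controller.
T_cam_grasp = grasp_in_camera_frame(T_cam_obj, T_obj_grasp)
```

Because the grasp pose is stored relative to the object model, re-estimating only `T_cam_obj` is enough to regrasp the same object in any placement, which is the point of the six-degree-of-freedom estimate.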
Saved in:
Published in: | IEEE transactions on cognitive and developmental systems 2023-03, Vol.15 (1), p.186-197 |
---|---|
Main Authors: | Wang, Rui; Su, Congjia; Yu, Hao; Wang, Shuo |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
container_end_page | 197 |
---|---|
container_issue | 1 |
container_start_page | 186 |
container_title | IEEE transactions on cognitive and developmental systems |
container_volume | 15 |
creator | Wang, Rui; Su, Congjia; Yu, Hao; Wang, Shuo |
description | The autonomous and precise grasping operation of robots is considered challenging in situations where there are different objects with different shapes and postures. In this study, we proposed a method of 6-D target pose estimation for robot autonomous manipulation. The proposed method is based on: 1) a fully convolutional neural network for scene semantic segmentation and 2) fast global registration to achieve target pose estimation. To verify the validity of the proposed algorithm, we built a robot grasping operation system and used the point cloud model of the target object and its pose estimation results to generate the robot grasping posture control strategy. Experimental results showed that the proposed method can achieve a six-degree-of-freedom pose estimation for arbitrarily placed target objects and complete the autonomous grasping of the target. Comparative experiments demonstrated that the proposed target pose estimation method achieved a significant improvement in average accuracy and real-time performance compared with traditional methods. |
doi_str_mv | 10.1109/TCDS.2022.3151331 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2379-8920 |
ispartof | IEEE transactions on cognitive and developmental systems, 2023-03, Vol.15 (1), p.186-197 |
issn | 2379-8920 2379-8939 |
language | eng |
recordid | cdi_proquest_journals_2784550555 |
source | IEEE Electronic Library (IEL) |
subjects | Algorithms; Artificial neural networks; Autonomous manipulation; Convolution; Grasping; Grasping (robotics); Image segmentation; Point cloud compression; Pose estimation; robot; Robot control; Robots; semantic segmentation; Semantics; target pose estimation; Three dimensional models |
title | Six-Dimensional Target Pose Estimation for Robot Autonomous Manipulation: Methodology and Verification |