Improving Object Grasp Performance via Transformer-Based Sparse Shape Completion
Published in: Journal of intelligent & robotic systems, 2022-03, Vol. 104 (3), Article 45
Format: Article
Language: English
Online access: Full text
Authors: Chen, Wenkai; Liang, Hongzhuo; Chen, Zhaopeng; Sun, Fuchun; Zhang, Jianwei
Abstract: Robotic grasping methods based on sparse partial point clouds currently attain excellent grasping performance on a variety of objects. However, they often generate wrong grasp candidates because geometric information about the object is missing. In this work, we propose TransSC, a novel and robust sparse shape completion model. Taking a segmented partial point cloud as input, the model uses a transformer-based encoder to explore richer point-wise features and a manifold-based decoder to exploit finer object details. Quantitative experiments verify the effectiveness of the proposed shape completion network and demonstrate that it outperforms existing methods. In addition, TransSC is integrated into a grasp evaluation network to generate a set of grasp candidates. Simulation experiments show that TransSC improves grasp generation compared with existing shape completion baselines, and a robotic experiment shows that, with TransSC, the robot is more successful at grasping an unknown number of objects randomly placed on a support surface.
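The abstract's "transformer-based encoder to explore more point-wise features" centers on self-attention over per-point feature vectors: every point attends to every other point, so each output feature carries global shape context. The following is a minimal, hedged sketch of one single-head scaled dot-product attention step over a point cloud feature matrix; the shapes, the single-head simplification, and the function name `self_attention` are illustrative assumptions, not the paper's actual TransSC architecture.

```python
import numpy as np

def self_attention(points, w_q, w_k, w_v):
    """One scaled dot-product self-attention step.

    points: (N, d) per-point feature vectors
    w_q, w_k, w_v: (d, d) learned projection matrices (random here)
    Returns (N, d) features where each point has aggregated global context.
    """
    q, k, v = points @ w_q, points @ w_k, points @ w_v
    scores = q @ k.T / np.sqrt(points.shape[1])   # (N, N) pairwise affinities
    scores -= scores.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True) # softmax: each row sums to 1
    return weights @ v                            # weighted mix of all points' values

rng = np.random.default_rng(0)
n, d = 128, 16                                    # e.g. 128 points, 16-dim features
pts = rng.normal(size=(n, d))
w_q, w_k, w_v = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
out = self_attention(pts, w_q, w_k, w_v)
print(out.shape)                                  # (128, 16)
```

Because attention is permutation-equivariant, reordering the input points only reorders the output rows, which is exactly the property a point-cloud encoder needs.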
DOI: 10.1007/s10846-022-01586-4
ISSN: 0921-0296
EISSN: 1573-0409
Publisher: Springer Netherlands, Dordrecht
License: CC BY 4.0 (open access)
Source: SpringerLink
Subjects: Algorithms; Analysis; Artificial Intelligence; Coders; Control; Datasets; Deep learning; Electrical Engineering; Engineering; Experiments; Grasping (robotics); Mechanical Engineering; Mechatronics; Regular Paper; Robotics; Robots; Semantics; Sensors; Simulation; Topical collection on Robotics Vision and Intelligent Control; Transformers