Deep Learning-Based Ensemble Approach for Autonomous Object Manipulation with an Anthropomorphic Soft Robot Hand

Autonomous object manipulation is a challenging task in robotics because it requires an understanding of essential object parameters such as position, 3D shape, grasping (i.e., touching) areas, and orientation. This work presents an autonomous object manipulation system using an anthropomorphic soft robot hand with deep learning (DL) vision intelligence for object detection, 3D shape reconstruction, and object grasping area generation. Object detection is performed using Faster R-CNN and an RGB-D sensor to produce a partial depth view of the objects randomly located in the working space. Three-dimensional object shape reconstruction is performed using a U-Net based on 3D convolutions with bottleneck layers and skip connections, generating a complete 3D shape of the object from the sensed single depth view. The grasping position and orientation are then computed from the reconstructed 3D object information (e.g., object shape and size) using a 3D-convolutional U-Net and Principal Component Analysis (PCA), respectively. The proposed autonomous object manipulation system is evaluated by grasping and relocating twelve objects not included in the training database, achieving an average of 95% successful grasps and 93% successful relocations.
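The PCA step mentioned in the abstract lends itself to a short illustration. The sketch below is not the authors' implementation; the function name, occupancy threshold, and voxel-grid layout are illustrative assumptions. It shows how a grasp orientation could be derived from a reconstructed occupancy volume by eigen-decomposing the covariance of the occupied voxel coordinates, so that the principal axes give candidate approach and closing directions for the hand.

```python
# Hypothetical sketch of PCA-based grasp orientation from a reconstructed
# 3D object volume (e.g., the output of a 3D shape-completion network).
# Names and parameters are assumptions for illustration only.
import numpy as np


def grasp_orientation_from_voxels(voxel_grid, threshold=0.5):
    """Return (principal_axes, centroid) of the occupied voxels.

    voxel_grid: (D, H, W) array of occupancy probabilities.
    principal_axes: 3x3 matrix whose columns are unit axes, ordered from
    the longest to the shortest object extent.
    """
    # Treat occupied voxel centers as a 3D point set.
    points = np.argwhere(voxel_grid > threshold).astype(float)  # (N, 3)
    if points.shape[0] < 3:
        raise ValueError("Not enough occupied voxels to estimate an orientation.")

    # PCA: eigen-decompose the covariance of the centered points.
    centroid = points.mean(axis=0)
    centered = points - centroid
    cov = np.cov(centered, rowvar=False)         # 3x3 symmetric matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order

    # Sort axes from longest to shortest extent; the major axis is a natural
    # candidate for aligning the hand's approach or closing direction.
    order = np.argsort(eigvals)[::-1]
    principal_axes = eigvecs[:, order]
    return principal_axes, centroid


if __name__ == "__main__":
    # Synthetic elongated box standing in for a reconstructed object.
    grid = np.zeros((32, 32, 32))
    grid[4:28, 12:20, 14:18] = 1.0               # longest along the first axis
    axes, center = grasp_orientation_from_voxels(grid)
    print("centroid:", center)
    print("major axis:", axes[:, 0])             # roughly aligned with axis 0
```

Because `np.linalg.eigh` operates on the symmetric covariance matrix, the returned axes are orthonormal; in a real system they would still have to be mapped into the robot hand's wrist frame and checked for reachability, which this sketch does not attempt.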

Bibliographic details

Published in: Electronics (Basel), 2024-01, Vol. 13 (2), p. 379
Main authors: Valarezo Añazco, Edwin; Guerrero, Sara; Rivera Lopez, Patricio; Oh, Ji-Heon; Ryu, Ga-Hyeon; Kim, Tae-Seong
Format: Article
Language: English
Online access: Full text
DOI: 10.3390/electronics13020379
ISSN: 2079-9292
EISSN: 2079-9292
Source: MDPI - Multidisciplinary Digital Publishing Institute; Elektronische Zeitschriftenbibliothek (freely accessible e-journals)
Subjects:
Algorithms
Anthropomorphism
Cameras
Control systems
Deep learning
Design and construction
End effectors
Grasping
Machine learning
Neural networks
Object recognition
Principal components analysis
Reconstruction
Robot arms
Robot learning
Robots
Sensors
Soft robotics