An Improved SAC-Based Deep Reinforcement Learning Framework for Collaborative Pushing and Grasping in Underwater Environments
Autonomous grasping is a fundamental task for underwater robots, but direct grasping of tightly stacked objects can lead to collisions and grasp failures, requiring pushing actions to separate the target object and increase grasp success (GS) rates. Hence, this article proposes a novel approach employing an improved soft actor-critic (SAC) algorithm within a deep reinforcement learning (RL) framework for collaborative pushing and grasping.
Saved in:
Published in: | IEEE transactions on instrumentation and measurement 2024, Vol.73, p.1-14 |
---|---|
Main authors: | Gao, Jian; Li, Yufeng; Chen, Yimin; He, Yaozhen; Guo, Jingwei |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Order full text |
container_end_page | 14 |
---|---|
container_issue | |
container_start_page | 1 |
container_title | IEEE transactions on instrumentation and measurement |
container_volume | 73 |
creator | Gao, Jian; Li, Yufeng; Chen, Yimin; He, Yaozhen; Guo, Jingwei |
description | Autonomous grasping is a fundamental task for underwater robots, but direct grasping of tightly stacked objects can lead to collisions and grasp failures, requiring pushing actions to separate the target object and increase grasp success (GS) rates. Hence, this article proposes a novel approach employing an improved soft actor-critic (SAC) algorithm within a deep reinforcement learning (RL) framework to achieve collaborative pushing and grasping actions. The developed scheme employs an end-to-end control strategy that maps input images to actions. Specifically, an attention mechanism is introduced in the visual perception module to extract the features needed for pushing and grasping actions, enhancing the training strategy. Moreover, a novel pushing reward function is designed, comprising a per-object distribution function around the target and a global object-distribution assessment network named PA-Net. Furthermore, an enhanced experience replay strategy is introduced to address the sparsity of grasp-action rewards. Finally, a training environment for underwater manipulators is established in which variations in light, water-flow noise, and pressure effects are incorporated to simulate underwater working conditions more realistically. Simulation and real-world experiments demonstrate that the proposed learning strategy efficiently separates target objects and avoids inefficient pushing actions, achieving a significantly higher GS rate. |
doi_str_mv | 10.1109/TIM.2024.3379048 |
format | Article |
fullrecord | ProQuest / IEEE Xplore aggregation record (ieee_id 10474399; ProQuest id 3015043157); publisher: IEEE, New York; CODEN: IEIMAO; ISSN: 0018-9456; EISSN: 1557-9662; DOI: 10.1109/TIM.2024.3379048; full text: https://ieeexplore.ieee.org/document/10474399; author ORCIDs: 0000-0002-1181-4531, 0000-0003-0634-6734, 0000-0003-1562-1443, 0000-0002-4671-8371, 0000-0002-5944-4146 |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 0018-9456 |
ispartof | IEEE transactions on instrumentation and measurement, 2024, Vol.73, p.1-14 |
issn | 0018-9456; 1557-9662 |
language | eng |
recordid | cdi_ieee_primary_10474399 |
source | IEEE Electronic Library (IEL) |
subjects | Algorithms; Attention mechanism; Cameras; Collaboration; collaborative actions; Deep learning; Deep reinforcement learning; deep reinforcement learning (RL); Distribution functions; Feature extraction; Grasping; Grasping (robotics); Machine learning; Manipulators; Pressure effects; Pushing; pushing–grasping; reward function; Task analysis; Training; underwater manipulator; Underwater robots; Visual perception; Water flow |
title | An Improved SAC-Based Deep Reinforcement Learning Framework for Collaborative Pushing and Grasping in Underwater Environments |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-12T14%3A19%3A17IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=An%20Improved%20SAC-Based%20Deep%20Reinforcement%20Learning%20Framework%20for%20Collaborative%20Pushing%20and%20Grasping%20in%20Underwater%20Environments&rft.jtitle=IEEE%20transactions%20on%20instrumentation%20and%20measurement&rft.au=Gao,%20Jian&rft.date=2024&rft.volume=73&rft.spage=1&rft.epage=14&rft.pages=1-14&rft.issn=0018-9456&rft.eissn=1557-9662&rft.coden=IEIMAO&rft_id=info:doi/10.1109/TIM.2024.3379048&rft_dat=%3Cproquest_RIE%3E3015043157%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3015043157&rft_id=info:pmid/&rft_ieee_id=10474399&rfr_iscdi=true |
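The pushing reward described in the abstract combines a per-object distribution function around the target with a global object-distribution score (the paper's PA-Net). The sketch below is only a minimal illustration of that two-term structure, not the authors' implementation: the function name `pushing_reward`, the clearance radius, the weights, and the mean-pairwise-distance proxy standing in for PA-Net are all assumptions.

```python
import math

def pushing_reward(target, objects, clearance=0.05, w_local=1.0, w_global=0.5):
    """Hedged sketch of a two-term pushing reward (all names hypothetical):
    a per-object term that grows as neighbours are pushed clear of the
    target, plus a global dispersion term standing in for a learned
    object-distribution score such as PA-Net."""
    # Local term: fraction of neighbours already beyond the clearance radius.
    dists = [math.dist(target, o) for o in objects if o != target]
    local = sum(d > clearance for d in dists) / max(len(dists), 1)
    # Global term: mean pairwise distance as a crude dispersion proxy.
    pairs = [(i, j) for i in range(len(objects)) for j in range(i + 1, len(objects))]
    global_disp = sum(math.dist(objects[i], objects[j]) for i, j in pairs) / max(len(pairs), 1)
    return w_local * local + w_global * global_disp
```

In this toy form, pushing actions that spread the clutter raise both terms, while pushes that merely shuffle objects without clearing the target leave the local term flat, which is the behaviour the paper's reward is designed to discourage.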