Multiagent Deep Reinforcement Learning for Joint Multichannel Access and Task Offloading of Mobile-Edge Computing in Industry 4.0
Industry 4.0 aims to create a modern industrial system by introducing technologies such as cloud computing, intelligent robotics, and wireless sensor networks. In this article, we consider the multichannel access and task offloading problem in mobile-edge computing (MEC)-enabled Industry 4.0 and describe this problem in a multiagent environment.
Saved in:
Published in: | IEEE internet of things journal 2020-07, Vol.7 (7), p.6201-6213 |
---|---|
Main authors: | Cao, Zilong ; Zhou, Pan ; Li, Ruixuan ; Huang, Siqi ; Wu, Dapeng |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 6213 |
---|---|
container_issue | 7 |
container_start_page | 6201 |
container_title | IEEE internet of things journal |
container_volume | 7 |
creator | Cao, Zilong ; Zhou, Pan ; Li, Ruixuan ; Huang, Siqi ; Wu, Dapeng |
description | Industry 4.0 aims to create a modern industrial system by introducing technologies such as cloud computing, intelligent robotics, and wireless sensor networks. In this article, we consider the multichannel access and task offloading problem in mobile-edge computing (MEC)-enabled Industry 4.0 and describe this problem in a multiagent environment. To solve this problem, we propose a novel multiagent deep reinforcement learning (MADRL) scheme. The solution enables edge devices (EDs) to cooperate with each other, which can significantly reduce the computation delay and improve the channel access success rate. Extensive simulation results with different system parameters reveal that the proposed scheme could reduce computation delay by 33.38% and increase the channel access success rate by 14.88% and channel utilization by 3.24% compared to the traditional single-agent reinforcement learning method. |
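As an illustrative sketch only (not the paper's actual scheme, which trains cooperative deep networks), the joint action space the abstract describes — each edge device simultaneously choosing a wireless channel and an offloading decision — can be modeled with independent tabular Q-learners. All names, reward values, and parameters below are hypothetical:

```python
import random

# Toy model of joint multichannel access + task offloading as a multiagent
# RL problem. The paper's MADRL scheme uses cooperative *deep* RL; this
# sketch substitutes independent tabular Q-learners purely to illustrate
# the joint (channel, offload) action space. All numbers are hypothetical.

N_AGENTS = 2      # edge devices (EDs)
N_CHANNELS = 2    # shared wireless channels
# Each action is a pair: (channel to access, offload task to MEC server?)
ACTIONS = [(c, o) for c in range(N_CHANNELS) for o in (0, 1)]

def step(joint_actions):
    """Reward model: channel access succeeds only without a collision;
    a successful offload to the MEC server earns a bonus (lower delay)."""
    chosen = [a[0] for a in joint_actions]
    rewards = []
    for ch, offload in joint_actions:
        if chosen.count(ch) > 1:              # collision: access fails
            rewards.append(-1.0)
        else:                                 # success, plus offload bonus
            rewards.append(1.0 + (0.5 if offload else 0.0))
    return rewards

def train(episodes=5000, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    # The environment is single-state, so each Q-table maps action -> value.
    q = [{a: 0.0 for a in ACTIONS} for _ in range(N_AGENTS)]
    for _ in range(episodes):
        joint = [rng.choice(ACTIONS) if rng.random() < eps
                 else max(q[i], key=q[i].get)      # epsilon-greedy choice
                 for i in range(N_AGENTS)]
        for i, (a, r) in enumerate(zip(joint, step(joint))):
            q[i][a] += alpha * (r - q[i][a])       # running-average update
    return q

q = train()
greedy = [max(qi, key=qi.get) for qi in q]  # each agent's learned policy
```

The abstract's comparison point is visible even in this toy: a single-agent learner optimizing in isolation cannot anticipate the other device's channel choice, whereas agents that account for each other (here only implicitly, through repeated interaction; in the paper, through cooperative MADRL training) can avoid collisions and favor offloading.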
doi_str_mv | 10.1109/JIOT.2020.2968951 |
format | Article |
publisher | PISCATAWAY: IEEE |
identifier | EISSN: 2327-4662 ; CODEN: IITJAU |
rights | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020 |
ieee_id | 9037194 |
orcidid | 0000-0002-5952-4998 ; 0000-0002-8629-4622 ; 0000-0002-7791-5511 ; 0000-0003-1755-0183 |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2327-4662 |
ispartof | IEEE internet of things journal, 2020-07, Vol.7 (7), p.6201-6213 |
issn | 2327-4662 |
language | eng |
recordid | cdi_webofscience_primary_000548817900047CitationCount |
source | IEEE Electronic Library (IEL) |
subjects | Cloud computing ; Computation offloading ; Computer Science ; Computer Science, Information Systems ; Computer simulation ; Deep learning ; Edge computing ; Engineering ; Engineering, Electrical & Electronic ; Heuristic algorithms ; Industries ; Industry 4.0 ; Machine-to-machine (M2M) communications ; Mobile computing ; mobile-edge computing (MEC) ; multiagent deep reinforcement learning (MADRL) ; Multiagent systems ; Reinforcement learning ; Robotics ; Science & Technology ; Servers ; Task analysis ; task offloading ; Technology ; Telecommunications ; Wireless sensor networks |
title | Multiagent Deep Reinforcement Learning for Joint Multichannel Access and Task Offloading of Mobile-Edge Computing in Industry 4.0 |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-16T05%3A53%3A20IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Multiagent%20Deep%20Reinforcement%20Learning%20for%20Joint%20Multichannel%20Access%20and%20Task%20Offloading%20of%20Mobile-Edge%20Computing%20in%20Industry%204.0&rft.jtitle=IEEE%20internet%20of%20things%20journal&rft.au=Cao,%20Zilong&rft.date=2020-07-01&rft.volume=7&rft.issue=7&rft.spage=6201&rft.epage=6213&rft.pages=6201-6213&rft.issn=2327-4662&rft.eissn=2327-4662&rft.coden=IITJAU&rft_id=info:doi/10.1109/JIOT.2020.2968951&rft_dat=%3Cproquest_RIE%3E2424189423%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2424189423&rft_id=info:pmid/&rft_ieee_id=9037194&rfr_iscdi=true |