A Heuristic Deep Q Learning for Offloading in Edge Devices in 5 g Networks

5G wireless environments carry very large volumes of data, so the number of computational tasks requested by intelligent wireless mobile nodes keeps growing. Meeting this computational demand is what enables the high reliability and low latency expected of a 5G network. Mobile edge computing (MEC) lets end systems with constrained computing capacity handle computationally demanding tasks and offers an accurate alternative to purely local execution. Because the MEC server is physically closer to the wireless nodes (WNs) than other servers, it satisfies the demands for low latency and high dependability. To overcome the shortcomings of existing work in latency, offloading, and task scheduling, the proposed method provides more efficient results. For job scheduling, this work uses a multi-agent collaborative deep reinforcement learning scheduling algorithm with a double deep Q-network (DDQN) in the MEC system. To minimize the total latency across the wireless nodes, it applies the Karush-Kuhn-Tucker (KKT) approach, which yields the optimal solutions for both partial and complete offloading of tasks. The double deep Q-network also reduces energy consumption and converges better across the wireless nodes. Compared to traditional algorithms such as DeMDRL and BiDRL, the proposed MDRL-DDQN demonstrates superior performance.
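
The scheduler described above rests on a double deep Q-network: one network chooses the greedy action while a second, periodically synchronized copy evaluates it, which curbs the value overestimation of a plain DQN and is consistent with the better convergence claimed in the abstract. The snippet below is only a minimal sketch of that update rule for a single offloading agent, not the authors' MDRL-DDQN; the state and action sizes, network shape, and hyperparameters are assumptions made for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: a state might encode queue lengths, channel quality and CPU
# load of a wireless node; an action picks local execution or one of the MEC servers.
STATE_DIM, N_ACTIONS = 8, 4

class QNet(nn.Module):
    """Small MLP approximating Q(s, a) for offloading/scheduling decisions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, s):
        return self.net(s)

online, target = QNet(), QNet()
target.load_state_dict(online.state_dict())   # target net starts as a copy of the online net
opt = torch.optim.Adam(online.parameters(), lr=1e-3)
gamma = 0.99

def ddqn_update(batch):
    """One double-DQN step: the online net selects argmax a', the target net scores it."""
    s, a, r, s2, done = batch
    q_sa = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        a2 = online(s2).argmax(dim=1, keepdim=True)       # action selection (online net)
        q_next = target(s2).gather(1, a2).squeeze(1)      # action evaluation (target net)
        y = r + gamma * (1.0 - done) * q_next
    loss = nn.functional.smooth_l1_loss(q_sa, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy batch of random transitions, just to show the expected shapes; in a real agent
# these would come from a replay buffer of offloading decisions, and the target net
# would be re-synchronized with the online net every few hundred updates.
B = 32
batch = (torch.randn(B, STATE_DIM),
         torch.randint(0, N_ACTIONS, (B,)),
         torch.randn(B),
         torch.randn(B, STATE_DIM),
         torch.zeros(B))
print(ddqn_update(batch))
```

In the multi-agent setting of the paper, each wireless node would run an agent of this kind, and the reward would combine the latency and energy terms the abstract refers to.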

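The KKT step is the standard first-order optimality machinery for the constrained latency-minimization problem behind partial offloading: each task i is split so that a fraction λ_i ∈ [0, 1] is sent to the MEC server and the rest runs locally. The formulation below is a generic sketch written only to make the role of the multipliers concrete; the notation (c_i for workload, d_i for data size, r_i for uplink rate, f for CPU frequencies, C for edge capacity) is an illustrative assumption, not the paper's model.

```latex
% Illustrative partial-offloading latency minimization with a shared edge capacity.
\begin{align*}
\min_{\lambda_i}\quad
  & \sum_i \left[ \frac{(1-\lambda_i)\,c_i}{f_i^{\mathrm{loc}}}
    + \frac{\lambda_i d_i}{r_i}
    + \frac{\lambda_i c_i}{f^{\mathrm{mec}}} \right] \\
\text{s.t.}\quad
  & \sum_i \lambda_i c_i \le C, \qquad 0 \le \lambda_i \le 1 .
\end{align*}

% Lagrangian with multipliers \mu \ge 0 (capacity) and \nu_i, \xi_i \ge 0 (box constraints):
\begin{align*}
\mathcal{L} = \sum_i \left[ \frac{(1-\lambda_i) c_i}{f_i^{\mathrm{loc}}}
    + \frac{\lambda_i d_i}{r_i} + \frac{\lambda_i c_i}{f^{\mathrm{mec}}} \right]
  + \mu\Big(\sum_i \lambda_i c_i - C\Big)
  - \sum_i \nu_i \lambda_i + \sum_i \xi_i (\lambda_i - 1).
\end{align*}

% KKT conditions at an optimum \lambda_i^\ast: stationarity and complementary slackness.
\begin{align*}
-\frac{c_i}{f_i^{\mathrm{loc}}} + \frac{d_i}{r_i} + \frac{c_i}{f^{\mathrm{mec}}}
  + \mu c_i - \nu_i + \xi_i &= 0, \\
\mu\Big(\sum_i \lambda_i^\ast c_i - C\Big) = 0, \qquad
  \nu_i \lambda_i^\ast = 0, \qquad \xi_i (\lambda_i^\ast - 1) &= 0 .
\end{align*}
```

Because this objective is linear in λ_i, the KKT conditions already separate the complete-offloading (λ_i = 1) and purely local (λ_i = 0) cases from genuinely partial offloading, which arises only when the shared capacity constraint is tight.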

Bibliographic details
Published in: Journal of Grid Computing, 2023-09, Vol. 21 (3), p. 37, Article 37
Authors: Dong, YanRu; Alwakeel, Ahmed M.; Alwakeel, Mohammed M.; Alharbi, Lubna A.; Althubiti, Sara A.
Format: Article
Language: English
Online access: Full text
DOI: 10.1007/s10723-023-09667-w
ISSN: 1570-7873
EISSN: 1572-9184
Source: SpringerLink Journals
Subjects:
5G mobile communication
Algorithms
Computation offloading
Computer Science
Data transmission
Deep learning
Edge computing
Energy consumption
Machine learning
Management of Computing and Information Systems
Mobile computing
Multiagent systems
Network latency
Nodes
Processor Architectures
Task scheduling
User Interfaces and Human Computer Interaction
Wireless networks