Dynamic Edge Computation Offloading for Internet of Things With Energy Harvesting: A Learning Method
Mobile edge computing (MEC) has recently emerged as a promising paradigm to meet the increasing computation demands of the Internet of Things (IoT). However, because the computation capacity of the MEC server is limited, an efficient computation offloading scheme, by which each IoT device decides whether to offload its generated data to the MEC server, is needed. Given the limited battery capacity of IoT devices, energy harvesting (EH) is introduced to extend the lifetime of IoT systems. However, because both the generated data and the harvested energy are unpredictable, designing an effective computation offloading scheme for an EH MEC system is challenging. To cope with this problem, we model the computation offloading process as a Markov decision process (MDP), so that no prior statistical information is needed. Reinforcement learning algorithms can then be adopted to derive the optimal offloading policy. To address the high time complexity of learning algorithms, we first introduce an after-state for each state-action pair, which greatly reduces the number of states in the formulated MDP. Then, to handle the continuous state space, a polynomial value function approximation method is introduced to accelerate the learning process. On this basis, an after-state reinforcement learning algorithm for the formulated MDP is proposed to obtain the optimal offloading policy. To provide practical guidance for real MEC systems, several analytical properties of the offloading policy are also presented. Our simulation results show that the proposed algorithm significantly improves the achieved system reward at reasonable complexity.
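As a rough illustration of the method the abstract describes, the sketch below combines after-state learning with a polynomial value function approximation for a binary offload decision. This is a minimal sketch under assumed dynamics, not the authors' implementation: the two-dimensional state (battery level, queued data), the energy costs `e_tx` and `e_local`, the placeholder reward, and the degree-2 feature map are all illustrative assumptions.

```python
import numpy as np

def features(b, d):
    # Degree-2 polynomial features over an after-state (b, d);
    # the feature choice is an assumption, not the paper's.
    return np.array([1.0, b, d, b * d, b ** 2, d ** 2])

class AfterStateLearner:
    """TD(0) learning on after-state values with a linear-in-features
    (polynomial) approximation. All dynamics below are illustrative."""

    def __init__(self, lr=0.01, gamma=0.9):
        self.w = np.zeros(6)   # weights of the polynomial value function
        self.lr = lr           # learning rate
        self.gamma = gamma     # discount factor

    def value(self, s_after):
        return self.w @ features(*s_after)

    def after_state(self, b, d, action, e_tx=0.2, e_local=0.1):
        # The deterministic effect of the action (energy spent, data
        # cleared) is folded in before the random data/energy arrivals.
        cost = e_tx if action == 1 else e_local
        return max(b - cost, 0.0), 0.0 if action == 1 else d

    def act(self, b, d, eps=0.1):
        # Epsilon-greedy choice between the two candidate after-states
        # (immediate reward omitted from the comparison for brevity).
        if np.random.rand() < eps:
            return np.random.randint(2)
        vals = [self.value(self.after_state(b, d, a)) for a in (0, 1)]
        return int(np.argmax(vals))

    def update(self, s_after, reward, next_s_after):
        # One TD(0) step: move w toward the bootstrapped target.
        td_err = reward + self.gamma * self.value(next_s_after) - self.value(s_after)
        self.w += self.lr * td_err * features(*s_after)
```

Because the deterministic part of each transition is absorbed into the after-state, the value function only has to be learned over after-states rather than over state-action pairs, which is what makes a small polynomial feature vector a plausible approximator for the continuous battery/data state space.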
Saved in:
Published in: | IEEE Internet of Things Journal, 2019-06, Vol. 6 (3), p. 4436-4447 |
---|---|
Main authors: | Wei, Ziling; Zhao, Baokang; Su, Jinshu; Lu, Xicheng |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 4447 |
---|---|
container_issue | 3 |
container_start_page | 4436 |
container_title | IEEE Internet of Things Journal |
container_volume | 6 |
creator | Wei, Ziling; Zhao, Baokang; Su, Jinshu; Lu, Xicheng |
doi_str_mv | 10.1109/JIOT.2018.2882783 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2327-4662 |
ispartof | IEEE Internet of Things Journal, 2019-06, Vol.6 (3), p.4436-4447 |
issn | 2327-4662 |
language | eng |
recordid | cdi_proquest_journals_2244346086 |
source | IEEE/IET Electronic Library (IEL) |
subjects | Algorithms; Batteries; Cloud computing; Complexity; Computation offloading; Computational modeling; Computer simulation; Edge computing; Energy harvesting; energy harvesting (EH); Internet of Things; Machine learning; Markov analysis; Markov chains; Mathematical analysis; Mobile computing; Polynomials; reinforcement learning; Resource management; Servers |
title | Dynamic Edge Computation Offloading for Internet of Things With Energy Harvesting: A Learning Method |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-04T14%3A03%3A20IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Dynamic%20Edge%20Computation%20Offloading%20for%20Internet%20of%20Things%20With%20Energy%20Harvesting:%20A%20Learning%20Method&rft.jtitle=IEEE%20internet%20of%20things%20journal&rft.au=Wei,%20Ziling&rft.date=2019-06-01&rft.volume=6&rft.issue=3&rft.spage=4436&rft.epage=4447&rft.pages=4436-4447&rft.issn=2327-4662&rft.eissn=2327-4662&rft.coden=IITJAU&rft_id=info:doi/10.1109/JIOT.2018.2882783&rft_dat=%3Cproquest_RIE%3E2244346086%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2244346086&rft_id=info:pmid/&rft_ieee_id=8542652&rfr_iscdi=true |