Adaptive Duty Cycling in Sensor Networks With Energy Harvesting Using Continuous-Time Markov Chain and Fluid Models

The dynamic and unpredictable nature of energy harvesting sources available for wireless sensor networks, and the time variation in network statistics like packet transmission rates and link qualities, necessitate the use of adaptive duty cycling techniques. Such adaptive control allows sensor nodes to achieve long-run energy neutrality, where energy supply and demand are balanced in a dynamic environment such that the nodes function continuously. In this paper, we develop a new framework enabling an adaptive duty cycling scheme for sensor networks that takes into account the node battery level, ambient energy that can be harvested, and application-level QoS requirements. We model the system as a Markov decision process (MDP) that modifies its state transition policy using reinforcement learning. The MDP uses continuous-time Markov chains (CTMCs) to model the network state of a node to obtain key QoS metrics like latency, loss probability, and power consumption, as well as to model the node battery level taking into account physically feasible rates of change. We show that with an appropriate choice of the reward function for the MDP, as well as a suitable learning rate, exploitation probability, and discount factor, the need to maintain minimum QoS levels for optimal network performance can be balanced with the need to promote the maintenance of a finite battery level to ensure node operability. Extensive simulation results show the benefit of our algorithm for different reward functions and parameters.
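The abstract mentions a learning rate, an exploitation probability, and a discount factor, which suggests an epsilon-greedy, Q-learning-style policy update for the MDP. The sketch below illustrates that general idea for duty-cycle selection; the candidate duty cycles, the discretized battery state, the reward weighting, and the DutyCycleAgent class are illustrative assumptions, not the paper's actual formulation (in the paper, QoS metrics and battery dynamics come from CTMC and fluid models).

```python
import random
from collections import defaultdict

# Candidate duty-cycle actions; the paper's actual action set is not stated
# in the abstract, so these values are purely illustrative.
DUTY_CYCLES = [0.1, 0.25, 0.5, 1.0]

def reward(qos_metric, battery_level, qos_weight=1.0, battery_weight=0.5):
    """Toy reward trading off a QoS score (higher is better) against keeping
    a finite battery reserve, in the spirit of the balance the abstract describes."""
    return qos_weight * qos_metric + battery_weight * battery_level

class DutyCycleAgent:
    """Epsilon-greedy, Q-learning-style duty-cycle controller (sketch only)."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.alpha = alpha        # learning rate
        self.gamma = gamma        # discount factor
        self.epsilon = epsilon    # exploration probability (1 - exploitation probability)
        self.q = defaultdict(float)  # Q-values keyed by (state, action)

    def choose(self, state):
        """Pick a duty cycle: explore with probability epsilon, else exploit."""
        if random.random() < self.epsilon:
            return random.choice(DUTY_CYCLES)
        return max(DUTY_CYCLES, key=lambda a: self.q[(state, a)])

    def update(self, state, action, r, next_state):
        """Standard one-step Q-learning update of the chosen (state, action) pair."""
        best_next = max(self.q[(next_state, a)] for a in DUTY_CYCLES)
        self.q[(state, action)] += self.alpha * (r + self.gamma * best_next
                                                 - self.q[(state, action)])

if __name__ == "__main__":
    agent = DutyCycleAgent()
    state = 5                                  # e.g. battery-level bucket 5 of 10
    action = agent.choose(state)
    # Placeholder observation: in the paper, the QoS metric and the battery
    # trajectory would be derived from the CTMC / fluid models, not fixed numbers.
    agent.update(state, action, reward(qos_metric=0.8, battery_level=4), next_state=4)
```

At each decision epoch a node would call choose() with its current state, apply the selected duty cycle, observe the resulting reward, and call update(); the placeholder arguments above simply stand in for the model-derived quantities.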

Bibliographic details
Published in: IEEE Journal on Selected Areas in Communications, 2015-12, Vol. 33 (12), pp. 2687-2700
Authors: Chan, Wai Hong Ronald; Zhang, Pengfei; Nevat, Ido; Nagarajan, Sai Ganesh; Valera, Alvin C.; Tan, Hwee-Xian; Gautam, Natarajan
Format: Article
Language: English
DOI: 10.1109/JSAC.2015.2478717
ISSN: 0733-8716
EISSN: 1558-0008
Subjects: adaptive duty cycle; adaptive systems; batteries; battery; continuous-time Markov chain; cycles; dynamics; energy harvesting; fluid model; Markov analysis; Markov decision process; mathematical models; measurement; networks; power demand; quality of service; quality of service architectures; reinforcement learning; sensors; sustainable development; wireless sensor networks