Robust Power Management via Learning and Game Design
Published in: Operations Research, 2021-01, Vol. 69 (1), pp. 331-345
Format: Article
Language: English
Authors: Zhou, Zhengyuan; Mertikopoulos, Panayotis; Moustakas, Aris L.; Bambos, Nicholas; Glynn, Peter
Descending to Stability: Robust Power Control for Stochastic Wireless Networks
The explosive growth of smart devices, together with their dense connections via wireless networks, has become a ubiquitous feature of smart cities. These devices, typically lightweight and battery-driven, burn power when they communicate with one another to form a functional ecosystem. As such, it is both theoretically interesting and practically impactful to design robust power control algorithms that operate smoothly under the challenges brought forth by dense networks in our current Internet-of-Things era. In “Robust Power Management via Learning and Game Design,” Z. Zhou, P. Mertikopoulos, A. Moustakas, N. Bambos, and P. Glynn take a hybrid approach that combines learning with game design to build a novel and efficient power control algorithm that is provably stable and optimal, thereby not only contributing deployable algorithms with strong performance but also opening up new avenues for distributed algorithm design at large.
We consider the target-rate power management problem for wireless networks, and we propose two simple, distributed power management schemes that regulate power in a provably robust manner by efficiently leveraging past information. Both schemes are obtained via a combined approach of learning and “game design,” where we (1) design a game with suitable payoff functions such that the optimal joint power profile in the original power management problem is the unique Nash equilibrium of the designed game; and (2) derive distributed power management algorithms by directing the network’s users to employ a no-regret learning algorithm to maximize their individual utility over time. To establish convergence, we focus on the well-known online eager gradient descent learning algorithm in the class of weighted strongly monotone games. In this class of games, we show that when players only have access to imperfect stochastic feedback, multiagent online eager gradient descent converges to the unique Nash equilibrium in mean square at an O(1/T) rate.
In the context of power management in static networks, we show that the designed games are weighted strongly monotone if the network is feasible (i.e., when all users can concurrently attain their target rates). This allows us to derive a geometric convergence rate to the joint optimal transmission power. More importantly, in stochastic networks where channel quality fluctuates over time, the designed games are also weighted strongly monotone, and the proposed algorithms converge in mean square to the joint optimal transmission power at an O(1/T) rate, even when the network is only feasible on average (i.e., users may be unable to meet their requirements with positive probability). This stands in stark contrast to existing algorithms (such as the seminal Foschini–Miljanic algorithm and its variants), which may fail to converge altogether.
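The learning scheme described in the abstract can be illustrated with a minimal sketch. The toy game below is not the paper’s power-control game: the two-user quadratic payoffs, the cross-interference coefficient `c`, the targets `r`, and all other numbers are invented for illustration. What it does show is the mechanism the abstract relies on: when the pseudo-gradient map is strongly monotone (here, `|c| < 1`), each user running projected online gradient ascent on its own noisy payoff gradient drives the joint profile to the unique Nash equilibrium.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-user game (all values chosen for illustration):
# user i maximizes u_i(p) = (r_i - c * p_{-i}) * p_i - p_i**2 / 2,
# so its payoff gradient is r_i - c * p_{-i} - p_i.
r = np.array([2.0, 1.5])   # per-user "targets"
c = 0.5                    # cross-interference; |c| < 1 makes the game strongly monotone
p_max = 5.0                # power budget defining the feasible interval [0, p_max]
noise = 0.3                # std. dev. of the stochastic gradient noise

# Unique (interior) Nash equilibrium: solve [[1, c], [c, 1]] @ p = r.
p_star = np.linalg.solve(np.array([[1.0, c], [c, 1.0]]), r)

p = np.zeros(2)            # initial power profile
for t in range(1, 20001):
    # Each user sees only a noisy estimate of its own payoff gradient.
    grad = r - c * p[::-1] - p + noise * rng.standard_normal(2)
    # Projected ("eager") online gradient ascent with a ~1/t step size,
    # the schedule used for strongly monotone games.
    p = np.clip(p + 2.0 / t * grad, 0.0, p_max)

print("final powers:", p, "Nash equilibrium:", p_star)
```

Despite never observing the other user’s action or the noise-free gradient, both iterates settle near `p_star`; in the paper’s setting the analogous designed game makes that equilibrium coincide with the jointly optimal transmission powers.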
DOI: 10.1287/opre.2020.1996
Publisher: INFORMS (Linthicum)
ISSN: 0030-364X
EISSN: 1526-5463
Subjects: Algorithms; Analysis; Computer Science; Artificial Intelligence; Convergence; Design; Electric power distribution; Energy management systems; Energy use; Game theory; Games; Group decisions; Machine learning; Mathematics; Methods; Mobile communication systems; Multiagent systems; Nash equilibrium; Online learning; Operations research; Optimization and control; Power management; Robustness; Wireless communication systems; Wireless networks