Robust Power Management via Learning and Game Design

Published in: Operations Research, 2021-01, Vol. 69 (1), pp. 331-345
Authors: Zhou, Zhengyuan; Mertikopoulos, Panayotis; Moustakas, Aris L.; Bambos, Nicholas; Glynn, Peter
Format: Article
Language: English
Online access: Full text
Description
Abstract: Descending to Stability: Robust Power Control for Stochastic Wireless Networks. The explosive growth of smart devices and their dense wireless interconnections have become a ubiquitous feature of smart cities. These devices, typically lightweight and battery-driven, burn power whenever they communicate with one another to form a functional ecosystem. It is therefore both theoretically interesting and practically impactful to design robust power control algorithms that operate smoothly under the challenges posed by dense networks in the current Internet-of-Things era. In "Robust Power Management via Learning and Game Design," Z. Zhou, P. Mertikopoulos, A. Moustakas, N. Bambos, and P. Glynn take a hybrid approach that combines learning with game design to build an efficient power control algorithm that is provably stable and optimal, thereby contributing deployable algorithms with strong performance and opening new avenues for distributed algorithm design at large.

We consider the target-rate power management problem for wireless networks, and we propose two simple, distributed power management schemes that regulate power in a provably robust manner by efficiently leveraging past information. Both schemes are obtained via a combined approach of learning and "game design," in which we (1) design a game with suitable payoff functions such that the optimal joint power profile of the original power management problem is the unique Nash equilibrium of the designed game, and (2) derive distributed power management algorithms by directing the network's users to employ a no-regret learning algorithm to maximize their individual utility over time. To establish convergence, we focus on the well-known online eager gradient descent learning algorithm in the class of weighted strongly monotone games. In this class of games, we show that when players have access only to imperfect stochastic feedback, multiagent online eager gradient descent converges to the unique Nash equilibrium in mean square at an O(1/T) rate. In the context of power management in static networks, we show that the designed games are weighted strongly monotone whenever the network is feasible (i.e., when all users can concurrently attain their target rates); this allows us to derive a geometric convergence rate to the jointly optimal transmission power. More importantly, in stochastic networks where channel quality fluctuates over time, the designed games are also …
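As a rough illustration of the learning scheme described in the abstract, the sketch below runs multiagent online gradient descent with noisy (stochastic) gradient feedback on a toy strongly monotone quadratic game. This is not the paper's actual power-control game: the quadratic losses, the coupling coefficient `c`, the noise level, and the target values `theta` are invented for illustration only.

```python
import random

def noisy_gradient(x, i, theta, c, sigma, rng):
    """Player i's exact partial gradient plus zero-mean Gaussian noise,
    modeling imperfect stochastic feedback."""
    others = sum(x) - x[i]
    exact = 2.0 * (x[i] - theta[i]) + c * others
    return exact + rng.gauss(0.0, sigma)

def online_gradient_descent(theta, c=0.5, sigma=0.1, T=50000, seed=0):
    """All players simultaneously descend their own noisy gradients.
    Player i's loss is (x_i - theta_i)^2 + (c/2) * x_i * sum_{j != i} x_j;
    for small c the game's gradient operator is strongly monotone, so the
    Nash equilibrium is unique and the iterates converge to it."""
    rng = random.Random(seed)
    n = len(theta)
    x = [0.0] * n
    for t in range(1, T + 1):
        step = 1.0 / t  # vanishing step size, matching the O(1/T) mean-square rate
        grads = [noisy_gradient(x, i, theta, c, sigma, rng) for i in range(n)]
        x = [x[i] - step * grads[i] for i in range(n)]
    return x

# Two players with targets 1.0 and 2.0; the unique Nash equilibrium solves
# the linear system 2(x1 - 1) + 0.5*x2 = 0, 2(x2 - 2) + 0.5*x1 = 0,
# i.e. x* = (8/15, 28/15).
x = online_gradient_descent([1.0, 2.0])
```

Despite the per-step noise, the joint iterate settles near the unique equilibrium, mirroring the mean-square convergence guarantee stated above for weighted strongly monotone games.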
ISSN: 0030-364X, 1526-5463
DOI: 10.1287/opre.2020.1996