Resource allocation of fog radio access network based on deep reinforcement learning


Saved in:
Bibliographic Details
Published in: Engineering Reports (Hoboken, N.J.), 2022-05, Vol. 4 (5), p. n/a
Main Authors: Tan, Jingru; Guan, Wenbo
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Summary: With the development of energy harvesting technologies and the smart grid, future radio access networks will trend toward multi-source power supply. In this article, a joint renewable-energy cooperation and resource-allocation scheme for fog radio access networks (F-RANs) with hybrid power supplies (including both the conventional grid and renewable energy sources) is studied. Our objective is to maximize the average throughput of the F-RAN architecture with hybrid energy sources while satisfying constraints on signal-to-noise ratio (SNR), available bandwidth, and energy harvesting. To solve this problem, dynamic power allocation in the network is studied using Q-learning and a Deep Q Network, respectively. Simulation results show that the two proposed algorithms have low complexity and improve the average throughput of the whole network compared with traditional algorithms.
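The tabular Q-learning approach mentioned in the abstract can be illustrated with a toy model: an access point with a small energy buffer picks a transmit-power level each slot, earning a Shannon-rate reward subject to its harvested-energy budget. All parameter values below (power levels, battery size, noise, harvest process) are illustrative assumptions, not the paper's actual system model.

```python
import numpy as np

rng = np.random.default_rng(0)

POWER_LEVELS = np.array([0.0, 0.5, 1.0, 2.0])  # candidate transmit powers (W); action a costs a energy units
BATTERY_STATES = 5                              # discretized stored-energy levels: 0..4 units
BANDWIDTH = 1.0                                 # normalized available bandwidth
NOISE = 0.1                                     # noise power (assumed)

def throughput(p):
    """Shannon-style rate for transmit power p (SNR = p / NOISE)."""
    return BANDWIDTH * np.log2(1.0 + p / NOISE)

def step(battery, action):
    """Spend `action` energy units if feasible; harvest 0 or 1 unit each slot."""
    harvest = int(rng.integers(0, 2))           # random renewable arrival
    if action > battery:                        # infeasible: budget exceeded, no transmission
        return 0.0, min(BATTERY_STATES - 1, battery + harvest)
    next_b = min(BATTERY_STATES - 1, battery - action + harvest)
    return throughput(POWER_LEVELS[action]), next_b

# Standard tabular Q-learning over (battery state, power action)
Q = np.zeros((BATTERY_STATES, len(POWER_LEVELS)))
alpha, gamma, eps = 0.1, 0.9, 0.1
b = 0
for _ in range(20000):
    if rng.random() < eps:                      # epsilon-greedy exploration
        a = int(rng.integers(len(POWER_LEVELS)))
    else:
        a = int(np.argmax(Q[b]))
    r, nb = step(b, a)
    Q[b, a] += alpha * (r + gamma * Q[nb].max() - Q[b, a])
    b = nb

print(np.argmax(Q, axis=1))  # learned power level per battery state
```

The paper's DQN variant would replace the table `Q` with a neural network approximator over a larger (possibly continuous) state space; the update rule and epsilon-greedy policy keep the same structure.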
ISSN: 2577-8196
DOI: 10.1002/eng2.12497