Deep Reinforcement Learning-based Resource Allocation for 5G Machine-type Communication in Active Distribution Networks with Time-varying Interference

Bibliographic Details
Published in: Mobile Networks and Applications, 2022-12, Vol. 27 (6), p. 2264-2279
Authors: Li, Qiyue, Cheng, Hong, Yang, Yangzhao, Tang, Haochen, Wang, Junbo, Luo, Guojun, Sun, Wei
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: Active distribution networks (ADNs) can solve the problem of grid compatibility for large-scale, intermittent renewable energy applications. As the core part of ADNs, advanced metering infrastructure (AMI) meets the system's reliability requirements for monitoring, diagnosis and control through extensive data acquisition and effective data transmission. The fifth-generation (5G) New Radio (NR) with ultra-reliable low-latency communication (URLLC) can be applied in ADNs for data transmission. However, the electromagnetic environment in ADNs is complex, and the interference is diverse and time-varying, which poses great challenges for data transmission in 5G communication networks. In this paper, we model data transmission in 5G, design a rolling solution framework that proceeds from predicting interference to improving data repetition, and then allocate wireless resources. To adapt resource allocation to time-varying interference, we propose an interference prediction algorithm that accurately estimates the interference distribution over the whole scheduling cycle. Moreover, to meet the second-level resource scheduling requirement, we model resource allocation as a dynamic programming problem with the goal of maximizing energy efficiency and solve it with a double deep Q-network (DDQN)-based reinforcement learning algorithm.
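The abstract names a DDQN-based reinforcement learning solver but gives no implementation details. The following is a minimal sketch, not the authors' code, of the core Double DQN update that such a resource-allocation agent would use: the online network picks the greedy next action and the target network evaluates it. The state dimension, action count, reward, and network sizes below are illustrative assumptions (the paper's state could encode predicted interference and queue status, and actions could index resource-block/repetition choices).

```python
# Minimal Double DQN (DDQN) update sketch for a resource-allocation agent.
# All dimensions and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn

STATE_DIM = 8      # assumed: e.g. predicted interference levels + queue state
N_ACTIONS = 16     # assumed: discrete resource-block / repetition combinations
GAMMA = 0.95       # discount factor (assumed)

class QNet(nn.Module):
    """Small fully connected Q-network: state -> one Q-value per action."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)

online_net = QNet(STATE_DIM, N_ACTIONS)
target_net = QNet(STATE_DIM, N_ACTIONS)
target_net.load_state_dict(online_net.state_dict())
optimizer = torch.optim.Adam(online_net.parameters(), lr=1e-3)

def ddqn_update(s, a, r, s_next, done):
    """One gradient step on a batch of transitions (s, a, r, s_next, done).

    Double DQN: the online network selects the greedy next action, while the
    target network evaluates it, which reduces Q-value overestimation bias.
    """
    q_sa = online_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_a = online_net(s_next).argmax(dim=1, keepdim=True)
        next_q = target_net(s_next).gather(1, next_a).squeeze(1)
        target = r + GAMMA * next_q * (1.0 - done)
    loss = nn.functional.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call on a random batch of 32 transitions (placeholder data).
batch = 32
loss = ddqn_update(
    s=torch.randn(batch, STATE_DIM),
    a=torch.randint(0, N_ACTIONS, (batch,)),
    r=torch.randn(batch),
    s_next=torch.randn(batch, STATE_DIM),
    done=torch.zeros(batch),
)
print(f"DDQN loss: {loss:.4f}")
```

In practice the reward here would be replaced by the paper's energy-efficiency objective and the target network would be refreshed periodically from the online network; those details are not specified in the abstract.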
ISSN: 1383-469X, 1572-8153
DOI: 10.1007/s11036-022-02006-5