Next-gen resource optimization in NB-IoT networks: Harnessing soft actor–critic reinforcement learning

Bibliographic Details
Published in: Computer networks (Amsterdam, Netherlands : 1999), 2024-10, Vol. 252, p. 110670, Article 110670
Main Authors: Anbazhagan, S., Mugelan, R.K.
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Abstract: Resource allocation in Narrowband Internet of Things (NB-IoT) networks is a complex challenge due to dynamic user demands, variable channel conditions, and distance considerations. Traditional approaches often struggle to adapt to these dynamic environments. This study leverages reinforcement learning (RL), specifically the Soft Actor–Critic (SAC) algorithm, to address the intricacies of NB-IoT resource allocation, comparing its performance against conventional RL algorithms such as Deep Q-Network (DQN) and Proximal Policy Optimization (PPO). The SAC algorithm trains an agent for adaptive resource allocation that accounts for energy efficiency, throughput, latency, fairness, interference constraints, recovery time, and long-term performance stability; the agent balances these objectives through a structured reward and penalty mechanism. To demonstrate scalability and effectiveness, experiments cover NB-IoT networks with varying deployment types and configurations, including standard urban and suburban, high-density urban, industrial IoT, rural and low-density, and IoT service provider scenarios, and generalization is assessed across applications such as smart metering, smart cities, smart agriculture, and asset tracking & management. A comprehensive analysis of performance metrics, including total reward, energy efficiency, throughput, fairness, and latency, demonstrates the efficacy of SAC compared to DQN and PPO, underscoring its promise for optimizing resource allocation in these dynamic environments.
ISSN: 1389-1286
DOI: 10.1016/j.comnet.2024.110670
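
The abstract states that the SAC agent balances energy efficiency, throughput, latency, fairness, and interference through a reward structure with penalty mechanisms, but it does not give the formulation. The sketch below is a minimal, hypothetical composite reward in Python: the weights, the latency and interference thresholds, and the use of Jain's fairness index are assumptions for illustration, not the paper's actual reward design.

```python
# Illustrative sketch only: the paper's abstract mentions a reward that balances
# energy efficiency, throughput, latency, fairness, and interference via penalties,
# but gives no exact formulation. All weights, normalizations, and the choice of
# Jain's fairness index below are assumptions for illustration.
from dataclasses import dataclass
from typing import Sequence


@dataclass
class StepMetrics:
    energy_efficiency: float            # e.g. bits per joule, normalized to [0, 1]
    latency_ms: float                   # observed scheduling latency for the step
    interference: float                 # aggregate interference level, normalized
    user_throughputs: Sequence[float]   # per-device throughput achieved in the step


def jain_fairness(throughputs: Sequence[float]) -> float:
    """Jain's fairness index in (0, 1]; 1 means a perfectly even allocation."""
    total = sum(throughputs)
    sq_sum = sum(t * t for t in throughputs)
    n = len(throughputs)
    return (total * total) / (n * sq_sum) if sq_sum > 0 else 0.0


def composite_reward(m: StepMetrics,
                     latency_budget_ms: float = 100.0,
                     interference_cap: float = 0.5) -> float:
    """Weighted sum of objectives minus penalties for violated constraints."""
    throughput = sum(m.user_throughputs)
    reward = (
        1.0 * m.energy_efficiency
        + 1.0 * throughput
        + 0.5 * jain_fairness(m.user_throughputs)
    )
    # Penalty mechanisms: discourage latency-budget and interference violations.
    if m.latency_ms > latency_budget_ms:
        reward -= 0.01 * (m.latency_ms - latency_budget_ms)
    if m.interference > interference_cap:
        reward -= 2.0 * (m.interference - interference_cap)
    return reward


if __name__ == "__main__":
    metrics = StepMetrics(energy_efficiency=0.8, latency_ms=120.0,
                          interference=0.3, user_throughputs=[0.4, 0.6, 0.5])
    print(composite_reward(metrics))
```

In an RL setup such a reward would be returned at each scheduling step of the NB-IoT environment and maximized by the SAC agent; the actual state, action, and constraint definitions would come from the paper itself.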