Energy and spectrum efficient cell switch-off with channel and power allocation in ultra-dense networks: A deep reinforcement learning approach
Saved in:
Published in: | Computer networks (Amsterdam, Netherlands : 1999), 2023-10, Vol.234, p.109912, Article 109912 |
---|---|
Main authors: | , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
Summary: | The high density of small cells in the Ultra-Dense Network (UDN) has increased the capacity and coverage of Fifth Generation (5G) cellular networks. However, as the number of Small Base Stations (SBSs) grows, energy consumption rises sharply. One suggested method to reduce energy consumption is to manage SBS On/Off switching. Moreover, due to spectrum constraints, Power Control and Resource Allocation (PCRA) are further significant issues in UDNs, affecting both Energy Efficiency (EE) and Spectrum Efficiency (SE). Recent works on UDNs have not jointly optimized SBS On/Off switching and PCRA to maximize EE and SE while ensuring the Quality of Service (QoS) requirements of User Equipment (UE). In this paper, a distributed method based on a multi-agent Deep Q-Network (DQN) is proposed to address these challenges simultaneously: each SBS learns a policy for managing On/Off switching and downlink PCRA using two DQNs. The proposed method seeks to optimize EE and SE while guaranteeing the minimum required data rate of UEs. Simulation results show that the proposed method improves EE and SE compared with previous solutions. Furthermore, unlike previous distributed approaches that use UEs as learning agents, the proposed method uses the SBSs as agents, which reduces the signaling overhead and computational complexity at the UEs. |
---|---|
ISSN: | 1389-1286, 1872-7069 |
DOI: | 10.1016/j.comnet.2023.109912 |