Development of an AI-Enabled Q-Agent for Making Data Offloading Decisions in a Multi-RAT Wireless Network

Bibliographic Details
Published in: Journal of Computer Networks and Communications, 2024-01, Vol. 2024, p. 1-13
Main Authors: Marvi, Murk; Aijaz, Adnan; Qureshi, Anam; Khurram, Muhammad
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Data offloading is considered a potential candidate for alleviating congestion on wireless networks and for improving user experience. However, due to the stochastic nature of wireless networks, it is important to take optimal actions under different conditions so that user experience is enhanced and congestion on heavily loaded radio access technologies (RATs) is reduced by offloading data through less loaded RATs. Since artificial intelligence (AI)-based techniques can learn optimal actions and adapt to different conditions, in this work we develop an AI-enabled Q-agent for making data offloading decisions in a multi-RAT wireless network. We employ a model-free Q-learning algorithm for training the Q-agent. We use stochastic geometry as a tool for estimating the average data rate offered by the network in a given region, accounting for the effect of interference. We use a Markov process for modeling users' mobility, that is, estimating the probability that a user is currently located in a region given its previous location. The user equipment (UE) plays the role of a Q-agent responsible for taking a sequence of actions such that the long-term discounted cost of using the network service is minimized. The Q-agent's performance has been evaluated and compared with existing data offloading policies. The results suggest that each existing policy offers the best performance only under specific conditions, whereas the Q-agent has learned to take near-optimal actions across different conditions and thus offers performance close to the best in each case.
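The abstract describes a model-free, tabular Q-learning agent that picks a RAT so as to minimize a long-term discounted cost. The following is a minimal sketch of that idea only; the states, the two actions, the cost table, and the random mobility transition below are illustrative assumptions, not the paper's actual model (which uses stochastic geometry for rate estimation and a Markov mobility model).

```python
import random

# Minimal tabular Q-learning sketch for a RAT-selection agent.
# All numbers here are toy values chosen for illustration.

N_STATES = 4        # e.g., coarse regions the UE can occupy (assumed)
ACTIONS = [0, 1]    # 0 = stay on cellular RAT, 1 = offload to the other RAT
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

# Hypothetical per-(state, action) service costs: lower is better.
# In even-numbered regions, offloading (action 1) is assumed cheaper.
COST = {(s, a): (0.2 if a == 1 and s % 2 == 0 else 0.8)
        for s in range(N_STATES) for a in ACTIONS}

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose_action(s):
    # epsilon-greedy over a *cost* objective, so greedy = argmin
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return min(ACTIONS, key=lambda a: Q[(s, a)])

def step(s, a):
    # Toy mobility: the user jumps to a uniformly random region.
    return COST[(s, a)], random.randrange(N_STATES)

random.seed(0)
s = 0
for _ in range(5000):
    a = choose_action(s)
    c, s_next = step(s, a)
    # Q-update for cost minimization:
    # target = cost + gamma * min_a' Q(s', a')
    best_next = min(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += ALPHA * (c + GAMMA * best_next - Q[(s, a)])
    s = s_next

# Greedy (minimum-cost) policy learned per region.
policy = {s: min(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
print(policy)
```

Under these toy costs, the agent learns to offload in the regions where offloading is cheaper, mirroring the paper's claim that the Q-agent adapts its action to the current condition rather than following one fixed policy.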
ISSN: 2090-7141, 2090-715X
DOI: 10.1155/2024/9571987