User Plane Function (UPF) Allocation for C-V2X Network Using Deep Reinforcement Learning
Published in: IEEE Access, 2025, Vol. 13, pp. 4547-4561
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: In this paper, we propose an online learning method for predicting the allocation of the User Plane Function (UPF) in Cellular Vehicle-to-Everything (C-V2X) networks integrated with Multi-Access Edge Computing (MEC). Our study employs Deep Reinforcement Learning (DRL) techniques, specifically the Deep Q-Network (DQN) and Actor-Critic (AC) algorithms, which decide the optimal locations of UPFs based on vehicle position and speed data. Our objective is to reduce the latency of communication between UPFs and vehicles by placing the UPF(s) optimally. Simulation results show that both the DQN and AC algorithms reduce latency significantly. We compare our proposed methods with existing approaches, namely the K-means Greedy Average and Greedy Average algorithms. The proposed AC algorithm achieves up to a 40% reduction in average latency compared with the baseline methods when the placement of multiple UPFs is considered.
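The abstract describes a DRL agent that observes vehicle positions and speeds and selects UPF placements to minimize latency. As a rough illustration of that decision loop (not the paper's implementation), the sketch below substitutes tabular Q-learning for the deep network; the candidate sites, road model, latency proxy, and hyperparameters are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not from the paper): 4 candidate MEC sites along
# a 100 km circular road, 5 vehicles; latency is proxied by the mean distance
# between vehicles and the chosen UPF site.
SITES = np.array([0.0, 25.0, 50.0, 75.0])  # candidate UPF locations (km)
N_VEHICLES = 5

def step_vehicles(pos, speed):
    """Advance vehicles along the road, wrapping at 100 km."""
    return (pos + speed) % 100.0

def avg_latency(pos, site):
    """Latency proxy: mean distance between vehicles and the chosen site."""
    return np.abs(pos - SITES[site]).mean()

def discretize(pos):
    """State: index of the 25 km road segment holding the vehicles' centroid."""
    return int(pos.mean() // 25) % 4

# Tabular Q-learning stands in for the DQN: Q[state, action]
Q = np.zeros((4, len(SITES)))
alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration

pos = rng.uniform(0, 100, N_VEHICLES)
speed = rng.uniform(0.5, 2.0, N_VEHICLES)
s = discretize(pos)
for t in range(5000):
    # Epsilon-greedy action: pick a UPF site for the next interval.
    a = rng.integers(len(SITES)) if rng.random() < eps else int(Q[s].argmax())
    pos = step_vehicles(pos, speed)
    r = -avg_latency(pos, a)  # reward = negative latency
    s2 = discretize(pos)
    # Standard Q-learning temporal-difference update.
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2

# After training, the greedy policy maps each traffic state to a UPF site.
policy = Q.argmax(axis=1)
```

In the paper's setting a DQN would replace the table `Q` with a neural network over continuous vehicle states, and the AC variant would learn a parameterized policy directly; the environment loop and reward structure stay conceptually the same.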
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3524886