Deep reinforcement learning based controller placement and optimal edge selection in SDN-based multi-access edge computing environments



Bibliographic Details
Published in: Journal of Parallel and Distributed Computing, 2024-11, Vol. 193, p. 104948, Article 104948
Main authors: Li, Chunlin, Liu, Jun, Ma, Ning, Zhang, Qingzhe, Zhong, Zhengwei, Jiang, Lincheng, Jia, Guolei
Format: Article
Language: English
Online access: Full text
Description
Summary:
• This paper studies controller placement and optimal edge selection in SDN-based multi-access edge computing environments.
• A deep reinforcement learning based SDN controller placement algorithm is designed.
• An efficient edge selection algorithm based on task stability and energy consumption minimization is proposed.
• The proposed controller placement scheme reduces the average network delay and improves load balancing between controllers.
• The proposed edge selection algorithm performs better in terms of task stability and energy consumption.

Multi-Access Edge Computing (MEC) provides computing capability close to clients to decrease response time and enhance Quality of Service (QoS). However, the underlying wireless network consists of heterogeneous hardware with different communication protocols and Application Programming Interfaces (APIs), which results in high operating costs and low operating efficiency for the MEC system. To this end, Software-Defined Networking (SDN) is applied to MEC, as it can support access to massive numbers of network devices and provide flexible, efficient management. A well-designed SDN controller placement scheme is crucial to the performance of SDN-assisted MEC. First, a Convolutional Neural Network (CNN)-Long Short-Term Memory (LSTM) model is used to predict network traffic and estimate controller load. Then, the optimization objective is formulated to ensure load balance and minimize system cost. Finally, a Deep Reinforcement Learning (DRL) algorithm is used to obtain the optimal placement. Building on the load-balancing controller placement algorithm, a dynamic edge selection method based on Channel State Information (CSI) is proposed to optimize task offloading, and a CSI-aware task queue execution strategy is designed. The task offloading problem is then modeled using queuing theory, and dynamic edge selection based on Lyapunov optimization is introduced to solve the model. In the experimental studies, the proposed algorithms are evaluated against two sets of baseline algorithms, including SAPKM, PSO, K-means, LADMA, LATA, and OAOP. Compared to these baselines, the proposed algorithms effectively reduce the average communication delay and total system energy consumption and improve SDN controller utilization.
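
The first stage described in the abstract (CNN-LSTM traffic prediction feeding the controller load estimate) can be pictured with a minimal sketch. This is not the authors' implementation: the window length, layer sizes, and the synthetic traffic trace below are assumptions chosen only to make the example self-contained.

```python
import numpy as np
import tensorflow as tf

# Illustrative only: a tiny CNN-LSTM mapping a window of past traffic samples
# to the next-step traffic volume (a proxy for controller load).
WINDOW = 16  # assumed look-back length, not taken from the paper

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),  # local traffic patterns
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.LSTM(32),                                      # temporal dependence
    tf.keras.layers.Dense(1),                                      # predicted load
])
model.compile(optimizer="adam", loss="mse")

# Synthetic stand-in for a historical traffic trace.
series = np.sin(np.linspace(0, 20, 400)) + 0.1 * np.random.randn(400)
X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])[..., None]
y = series[WINDOW:]

model.fit(X, y, epochs=2, batch_size=32, verbose=0)
next_load = model.predict(X[-1:], verbose=0)[0, 0]  # would feed the placement objective
```

In the paper's pipeline, a prediction of this kind supplies the load term in the placement objective that the DRL agent then optimizes; here it simply returns a next-step estimate.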
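
The edge selection stage (CSI-driven offloading with queue stability via Lyapunov optimization) is commonly realized as a drift-plus-penalty rule. The sketch below is a hedged illustration of that general technique, not the paper's algorithm: the per-edge backlog queues, the toy energy model, and the trade-off weight V are all assumptions.

```python
import random

V = 50.0          # assumed Lyapunov trade-off weight: larger V favors energy over backlog
NUM_EDGES = 3
SLOTS = 100

queues = [0.0] * NUM_EDGES  # task backlog at each edge server

def energy_cost(edge, bits, csi):
    """Toy transmission-energy model: a worse channel costs more energy per bit."""
    return bits / csi[edge]

for t in range(SLOTS):
    task_bits = random.uniform(1.0, 5.0)                            # arriving task size
    csi = [random.uniform(0.5, 2.0) for _ in range(NUM_EDGES)]      # channel state per edge

    # Drift-plus-penalty rule: choose the edge minimizing V*energy + backlog*load.
    best = min(range(NUM_EDGES),
               key=lambda e: V * energy_cost(e, task_bits, csi) + queues[e] * task_bits)

    # Queue update: offloaded bits join the chosen queue; each edge serves some work.
    service = 2.5                                                   # assumed service rate per slot
    for e in range(NUM_EDGES):
        arrival = task_bits if e == best else 0.0
        queues[e] = max(queues[e] - service, 0.0) + arrival

print("final backlogs:", [round(q, 2) for q in queues])
```

The design intuition matches the abstract: keeping the backlog term in the selection rule bounds the task queues (stability), while the V-weighted energy term pushes offloading toward edges with favorable CSI.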
ISSN:0743-7315
DOI:10.1016/j.jpdc.2024.104948