Adaptive Joint Routing and Caching in Knowledge-Defined Networking: An Actor-Critic Deep Reinforcement Learning Approach

Bibliographic Details
Published in: IEEE Transactions on Mobile Computing, 2024-12, pp. 1-18
Authors: Xiao, Yang; Yu, Huihan; Yang, Ying; Wang, Yixing; Liu, Jun; Ansari, Nirwan
Format: Journal article
Language: English
Abstract: By integrating the software-defined networking (SDN) architecture with the machine learning-based knowledge plane, knowledge-defined networking (KDN) is revolutionizing established traffic engineering (TE) methodologies. This paper investigates the challenging joint routing and caching problem in KDN-based networks, managing multiple traffic flows to improve long-term quality-of-service (QoS) performance. This challenge is formulated as a computationally expensive non-convex mixed-integer non-linear programming (MINLP) problem, for which heuristic methods cannot achieve near-optimal solutions. To address this issue, we present DRL-JRC, an actor-critic deep reinforcement learning (DRL) algorithm for adaptive joint routing and caching in KDN-based networks. DRL-JRC orchestrates the optimization of multiple QoS metrics, including end-to-end delay, packet loss rate, load balancing index, and hop count. During offline training, DRL-JRC employs proximal policy optimization (PPO) to smooth the policy optimization process. In addition, the learned policy can be seamlessly integrated with conventional caching solutions during online execution. Extensive experiments demonstrate the comprehensive superiority of DRL-JRC over baseline methods in various scenarios. Meanwhile, DRL-JRC consistently outperforms the heuristic baseline under partial policy deployment during execution. Compared to the average performance of the baseline methods, DRL-JRC reduces the end-to-end delay by 51.14% and the packet loss rate by 40.78%.
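The abstract credits PPO with smoothing policy optimization in an actor-critic setup over several QoS metrics. As a rough illustration only, the PyTorch sketch below shows a shared-trunk actor-critic network, a hypothetical scalarized QoS reward over the four metrics the abstract lists, and the standard PPO clipped-surrogate loss. The network sizes, reward weights, and all names here are assumptions for illustration, not the paper's DRL-JRC implementation.

```python
# Illustrative PPO actor-critic sketch (not the paper's DRL-JRC code).
# Assumptions: discrete joint routing/caching actions, a scalarized QoS
# reward, and hypothetical layer sizes and weight values.
import torch
import torch.nn as nn

def qos_reward(delay, loss_rate, load_index, hops, w=(0.4, 0.3, 0.2, 0.1)):
    """Hypothetical scalarization of the four QoS metrics named in the
    abstract; the actual reward design of DRL-JRC is not specified here."""
    return -(w[0] * delay + w[1] * loss_rate + w[2] * load_index + w[3] * hops)

class ActorCritic(nn.Module):
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh())
        self.actor = nn.Linear(hidden, n_actions)  # logits over joint routing/caching actions
        self.critic = nn.Linear(hidden, 1)         # state-value estimate V(s)

    def forward(self, state):
        h = self.trunk(state)
        dist = torch.distributions.Categorical(logits=self.actor(h))
        return dist, self.critic(h).squeeze(-1)

def ppo_loss(model, states, actions, old_log_probs, returns, advantages,
             clip_eps=0.2, value_coef=0.5, entropy_coef=0.01):
    """Standard PPO clipped-surrogate objective (Schulman et al., 2017)."""
    dist, values = model(states)
    ratio = torch.exp(dist.log_prob(actions) - old_log_probs)  # importance ratio
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    policy_loss = -torch.min(ratio * advantages, clipped * advantages).mean()
    value_loss = (returns - values).pow(2).mean()               # critic regression
    entropy_bonus = dist.entropy().mean()                       # encourages exploration
    return policy_loss + value_coef * value_loss - entropy_coef * entropy_bonus
```

Clipping the importance ratio keeps each policy update close to the policy that collected the data, which is the sense in which PPO "smooths" optimization as the abstract describes.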
ISSN: 1536-1233
DOI: 10.1109/TMC.2024.3521247