Proactive and AoI-aware Failure Recovery for Stateful NFV-enabled Zero-Touch 6G Networks: Model-Free DRL Approach
Published in: arXiv.org, 2021-09
Main Authors: , , , , ,
Format: Article
Language: English
Subjects:
Online Access: Full Text
Abstract: In this paper, we propose a Zero-Touch, deep reinforcement learning (DRL)-based Proactive Failure Recovery framework called ZT-PFR for stateful network function virtualization (NFV)-enabled networks. To this end, we formulate a resource-efficient optimization problem that minimizes a network cost function comprising the resource cost and a wrong-decision penalty. As a solution, we propose state-of-the-art DRL-based methods such as soft actor-critic (SAC) and proximal policy optimization (PPO). In addition, to train and test our DRL agents, we propose a novel impending-failure model. Moreover, to keep network status information at an acceptable freshness level for appropriate decision-making, we apply the concept of age of information (AoI) to strike a balance between event-based and scheduling-based monitoring. Several key system and DRL algorithm design insights for ZT-PFR are drawn from our analysis and simulation results. For example, we use a hybrid neural network, consisting of long short-term memory (LSTM) layers in the DRL agent structure, to capture the time dependency of impending failures.
ISSN: 2331-8422
DOI: 10.48550/arxiv.2103.03817
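
The abstract notes that the DRL agents use a hybrid neural network with long short-term memory (LSTM) layers to capture the time dependency of impending failures. Below is a minimal sketch of what such an actor network could look like, assuming a PyTorch implementation; the class name HybridLSTMActor, the layer sizes, and the observation/action dimensions are illustrative assumptions and are not taken from the paper.

```python
import torch
import torch.nn as nn


class HybridLSTMActor(nn.Module):
    """Hypothetical hybrid actor network: an LSTM over a window of
    monitoring observations followed by fully connected layers that
    produce a categorical policy over recovery actions."""

    def __init__(self, obs_dim: int, num_actions: int,
                 lstm_hidden: int = 64, fc_hidden: int = 128):
        super().__init__()
        # The LSTM captures the temporal correlation of the impending-failure signal.
        self.lstm = nn.LSTM(input_size=obs_dim, hidden_size=lstm_hidden,
                            batch_first=True)
        # A feed-forward head maps the last hidden state to action logits.
        self.head = nn.Sequential(
            nn.Linear(lstm_hidden, fc_hidden),
            nn.ReLU(),
            nn.Linear(fc_hidden, num_actions),
        )

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (batch, time, obs_dim) window of recent monitoring reports.
        _, (h_n, _) = self.lstm(obs_seq)
        return self.head(h_n[-1])  # action logits from the final hidden state


if __name__ == "__main__":
    # Toy usage: 4 monitored features per state, 5 recovery actions,
    # and a window of 10 monitoring reports (all values are illustrative).
    actor = HybridLSTMActor(obs_dim=4, num_actions=5)
    obs = torch.randn(2, 10, 4)
    dist = torch.distributions.Categorical(logits=actor(obs))
    print(dist.sample())  # one recovery action index per batch element
```

In an actual SAC or PPO agent, a critic network of a similar hybrid form would be trained alongside this actor on the same observation window; the paper's exact architecture and hyperparameters are not specified in this record.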