Dynamic Multi-objective Service Function Chain Placement Based on Deep Reinforcement Learning

Bibliographic Details
Published in: IEEE Transactions on Network and Service Management, 2024-08, p. 1-1
Authors: Zhou, Cong, Zhao, Baokang, Tang, Fengxiao, Han, Biao, Wang, Baosheng
Format: Article
Language: English
Description
Abstract: Service function chain placement is crucial for supporting service flexibility and diversity across different users and vendors, and the problem has been proven to be NP-hard. Existing deep reinforcement learning based methods either handle only a limited number of objectives or require excessively long training times, and they cannot cope when the number of objectives changes dynamically. It is therefore necessary to model service function chain placement as a multi-objective problem, which can be decomposed into multiple subproblems by weight vectors. In this paper, we first reveal the relationship between weight vectors and solution positions, which reduces the training time needed to obtain a better placement model. We then design a novel algorithm for the service function chain placement problem, called rzMODRL. The weight vectors are divided into zones for parallel training, and an order is defined over the final models obtained at the end of training, which saves time and improves model quality. The dynamic-objective placement method builds on the high-dimensional model to avoid retraining for low-dimensional placements. Evaluation results show that the proposed algorithms improve the service acceptance ratio by up to 32% and the hyper-volume values by 14% in multi-objective service function chain placement, where hyper-volume is widely used to evaluate convergence and diversity simultaneously in multi-objective optimization. The approach is also effective for the dynamic-objective service function chain placement problem, where the difference in average hyper-volume values is 10.44%.
ISSN: 1932-4537
DOI: 10.1109/TNSM.2024.3446248
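
Illustrative sketch (an assumption-based illustration, not code from the paper): the abstract describes decomposing the multi-objective placement problem into scalar subproblems via weight vectors and dividing those vectors into zones that are trained in parallel. The Python sketch below shows one common way such a decomposition can be set up; the function names (generate_weight_vectors, split_into_zones, scalarize), the simplex-lattice weight generation, and the weighted-sum scalarization are illustrative choices and are not taken from rzMODRL.

# Hypothetical sketch of weight-vector decomposition and zoning for
# multi-objective SFC placement. Not the paper's rzMODRL implementation.
from itertools import combinations
from typing import List, Sequence


def generate_weight_vectors(num_objectives: int, divisions: int) -> List[List[float]]:
    """Simplex-lattice weight vectors: non-negative components summing to 1."""
    vectors = []
    # Stars-and-bars: place (num_objectives - 1) bars among
    # (divisions + num_objectives - 1) slots to enumerate all integer
    # compositions of `divisions`, then normalise.
    slots = divisions + num_objectives - 1
    for bars in combinations(range(slots), num_objectives - 1):
        counts, prev = [], -1
        for b in bars:
            counts.append(b - prev - 1)
            prev = b
        counts.append(slots - prev - 1)
        vectors.append([c / divisions for c in counts])
    return vectors


def split_into_zones(weights: List[List[float]], num_zones: int) -> List[List[List[float]]]:
    """Partition the weight vectors into contiguous zones, one per parallel worker."""
    zone_size = (len(weights) + num_zones - 1) // num_zones
    return [weights[i:i + zone_size] for i in range(0, len(weights), zone_size)]


def scalarize(objectives: Sequence[float], weight: Sequence[float]) -> float:
    """Weighted-sum scalarization: one weight vector -> one single-objective reward."""
    return sum(w * o for w, o in zip(weight, objectives))


if __name__ == "__main__":
    # Toy objective vector for one placement decision, e.g.
    # (acceptance reward, negated latency, negated energy cost).
    objective_vector = [0.8, -0.3, -0.5]

    weights = generate_weight_vectors(num_objectives=3, divisions=4)
    zones = split_into_zones(weights, num_zones=3)
    print(f"{len(weights)} weight vectors split into {len(zones)} zones")
    for w in zones[0]:
        print([round(x, 2) for x in w], "->", round(scalarize(objective_vector, w), 3))

Under these assumptions, each zone of weight vectors would be handed to a separate training worker, mirroring the parallel, zone-based training that the abstract outlines.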