RNN‐EdgeQL: An auto‐scaling and placement approach for SFC
Saved in:
Published in: International Journal of Network Management, 2023-07, Vol. 33 (4), p. n/a
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract:
This paper proposes prediction-based scaling and placement of service function chains (SFCs) to improve service level agreement (SLA) compliance and reduce operation cost. We used a variant of recurrent neural network (RNN) called gated recurrent unit (GRU) for resource demand prediction. Then, considering these predictions, we built an intuitive scale in/out algorithm. We also developed an algorithm that applies Q-Learning in an edge computing environment (EdgeQL) to place these scaled-out VNFs in appropriate locations. The integrated algorithm that combines prediction, scaling, and placement is called RNN-EdgeQL. RNN-EdgeQL (v2) is further improved to achieve application-agnostic, group-level elasticity in the chain, independent of the applications installed on the VNFs. We tested our algorithm on two realistic temporally dynamic load models, Internet traffic (Abilene) and application-specific traffic (Wiki), on an OpenStack testbed. The contribution of this article is threefold. First, the prediction model prepares the target SFC for the upcoming load. Second, the application-agnostic character of the algorithm achieves group-level elasticity in the SFC. Finally, the EdgeQL placement model minimizes the end-to-end path of an SFC in a multi-access edge computing (MEC) environment. As a result, RNN-EdgeQL (v2) gives the lowest overall latency, lowest SLA violations, and lowest VNF requirement, compared to RNN-EdgeQL (v1) and Threshold-OpenStack default placement.
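The abstract describes a GRU-based demand predictor whose output drives a scale in/out decision. The sketch below is a rough illustration only of how such a pipeline could look in Python/PyTorch; the window shape, the per-VNF utilization measure, and the thresholds are assumptions for illustration, not the paper's actual model or parameters.

```python
# Illustrative sketch: GRU demand predictor plus a simple scale in/out rule.
# Hyperparameters, features, and thresholds are assumed, not taken from the paper.
import torch
import torch.nn as nn

class GRUPredictor(nn.Module):
    def __init__(self, hidden_size: int = 32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, x):          # x: (batch, window, 1) normalized load samples
        _, h = self.gru(x)         # h: (num_layers, batch, hidden)
        return self.out(h[-1])     # predicted load for the next interval

def scaling_decision(predicted_load: float, n_vnfs: int,
                     scale_out_thr: float = 0.8, scale_in_thr: float = 0.3) -> str:
    """Illustrative scale in/out rule on predicted per-VNF utilization."""
    per_vnf = predicted_load / n_vnfs
    if per_vnf > scale_out_thr:
        return "scale_out"         # add a VNF instance before the load arrives
    if per_vnf < scale_in_thr and n_vnfs > 1:
        return "scale_in"          # release an instance to save resources
    return "no_change"
```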
The proposed proactive scaling and placement of SFCs using ML can achieve group-level, application-agnostic elasticity of SFCs with reduced CAPEX and OPEX. It considers the edge computing environment and is validated with two realistic traffic loads on a physical OpenStack testbed.
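The EdgeQL placement step is described as Q-Learning over candidate edge locations. Below is a minimal tabular Q-learning sketch, assuming the state identifies the VNF currently being placed, the action is the chosen edge node, and the reward penalizes end-to-end path latency; these modeling choices are illustrative and not the paper's exact EdgeQL formulation.

```python
# Illustrative sketch: tabular Q-learning for VNF-to-edge-node placement.
# State/action/reward definitions are assumptions for demonstration purposes.
import random
from collections import defaultdict

class EdgeQLPlacer:
    def __init__(self, nodes, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.nodes = nodes                  # candidate edge nodes (e.g., node names)
        self.q = defaultdict(float)         # Q[(state, action)] -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        """Epsilon-greedy action selection over candidate nodes."""
        if random.random() < self.epsilon:
            return random.choice(self.nodes)                       # explore
        return max(self.nodes, key=lambda a: self.q[(state, a)])   # exploit

    def update(self, state, action, reward, next_state):
        """Standard Q-learning update; reward could be, e.g., negative path latency."""
        best_next = max(self.q[(next_state, a)] for a in self.nodes)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

In use, one would iterate over the scaled-out VNFs of a chain, call `choose` for each placement, observe the resulting latency-based reward, and call `update` to refine the table over episodes.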
ISSN: 1055-7148, 1099-1190
DOI: 10.1002/nem.2213