Efficient deep-reinforcement learning aware resource allocation in SDN-enabled fog paradigm
Saved in:
Published in: | Automated software engineering 2022-05, Vol.29 (1), p.20, Article 20 |
Main authors: | , , , , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
Abstract: | Fog computing is an emerging paradigm that offers ubiquitous, latency-aware services to delay-sensitive applications. However, due to the mobility of applications, allocating resources to application workloads in distributed, dynamic fog networks is a challenging problem. This paper investigates the resource allocation problem in software-defined network (SDN) enabled fog networks. Based on SDN, we distribute the fog network, which consists of many fog nodes. The considered problem contains many stringent constraints (e.g., mobility, deadline, and resource capacity) that must be satisfied during the execution of applications. By offloading some tasks to fog nodes, system performance can be improved through reduced latency and energy consumption, the two key metrics of interest in fog networks. The study proposes a novel container-based architecture with different fog nodes. Based on this architecture, the study devises a deep Q-network based resource-allocation scheme consisting of several components that solve the problem: a mobility controller, resource searching, resource allocation, and task migration. Performance evaluation shows that the proposed architecture and schemes outperform existing studies in terms of application costs (energy and execution time) by 30%. |
ISSN: | 0928-8910 1573-7535 |
DOI: | 10.1007/s10515-021-00318-6 |
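The abstract describes a deep Q-network based scheme that learns where to place application tasks among fog nodes. As a rough illustration of that idea, the sketch below substitutes a tabular Q-learning loop for the paper's deep network; the state space, reward model, and all parameters are hypothetical and chosen only to keep the example self-contained.

```python
import random

# Illustrative sketch only: Q-learning for assigning tasks to fog nodes,
# loosely modelled on the DQN-based allocation the abstract describes.
# A Q-table replaces the deep network; every constant here is hypothetical.

NODES = 3            # candidate fog nodes (the actions)
LOADS = 4            # discretised network-load levels (the states)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2   # learning rate, discount, exploration

Q = [[0.0] * NODES for _ in range(LOADS)]

def reward(state, node):
    # Hypothetical cost model: penalise offloading to the "busiest" node,
    # standing in for the paper's latency/energy objectives.
    latency = 1.0 + (1.0 if node == state % NODES else 0.0)
    return -latency

random.seed(0)
state = 0
for step in range(5000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        node = random.randrange(NODES)
    else:
        node = max(range(NODES), key=lambda a: Q[state][a])
    r = reward(state, node)
    next_state = (state + 1) % LOADS          # toy load dynamics
    # standard Q-learning update rule
    Q[state][node] += ALPHA * (r + GAMMA * max(Q[next_state]) - Q[state][node])
    state = next_state

# The learned greedy policy avoids the penalised node in every state.
policy = [max(range(NODES), key=lambda a: Q[s][a]) for s in range(LOADS)]
print(policy)
```

In the paper's setting, the state would instead encode mobility, deadline, and capacity constraints, and the Q-function would be approximated by a neural network rather than a table.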