Actor Critic-based Multi Objective Reinforcement Learning for Multi Access Edge Computing


Bibliographic Details
Published in: International Journal of Advanced Computer Science & Applications, 2024, Vol. 15 (2)
Main authors: Khot, Vishal, M, Vallisha, Pai, Sharan S, K, Chandra Shekar R, N, Kayarvizhy
Format: Article
Language: English
Online access: Full text
Description
Abstract: In recent times, large applications that need near real-time processing are increasingly being run on devices with limited resources. Multi-access edge computing is a computing paradigm that addresses this problem by placing servers as close to resource-constrained devices as possible. However, the edge device must balance multiple conflicting objectives, viz., energy consumption, latency, task drop rate, and quality of experience. Many previous approaches optimize only one objective or a fixed linear combination of multiple objectives. Such approaches do not ensure the best performance for applications running on edge servers, as there is no guarantee that the solution they obtain lies on the Pareto front. In this work, Multi-Objective Reinforcement Learning (MORL) with an Actor-Critic model is proposed to optimize the drop rate, latency, and energy consumption parameters when making offloading decisions. The model is compared with MORL-Tabular, MORL-Deep Q Network, and MORL-Double Deep Q Network models, and outperforms all of them in terms of drop rate and latency.
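The abstract's key distinction is between a fixed linear scalarization of objectives and solutions that lie on the Pareto front. As a minimal illustration (not the paper's algorithm), the sketch below shows the standard Pareto-dominance test for offloading outcomes described by hypothetical (drop_rate, latency, energy) cost vectors; all names are illustrative assumptions.

```python
from typing import Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """Return True if cost vector `a` Pareto-dominates `b`.

    All objectives are costs to minimize (e.g. drop rate, latency,
    energy consumption): `a` dominates `b` if it is no worse on every
    objective and strictly better on at least one.
    """
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions: list[tuple[float, ...]]) -> list[tuple[float, ...]]:
    """Filter candidate cost vectors down to the non-dominated set."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]
```

A fixed linear weighting picks a single point from this set in advance; the MORL formulation instead aims to find policies whose outcomes are non-dominated in this sense.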
ISSN: 2158-107X, 2156-5570
DOI: 10.14569/IJACSA.2024.0150241