Reinforcement learning based offloading and resource allocation for multi-intelligent vehicles in green edge-cloud computing

Bibliographic details
Published in: Computer Communications, 2025-02, Vol. 232, p. 108051, Article 108051
Authors: Li, Liying; Gao, Yifei; Xia, Peiwen; Lin, Sijie; Cong, Peijin; Zhou, Junlong
Format: Article
Language: English
Online access: Full text
Abstract: The green edge-cloud computing (GECC) collaborative service architecture has become one of the mainstream frameworks for real-time intensive multi-intelligent vehicle applications in intelligent transportation systems (ITS). In GECC systems, effective task offloading and resource allocation are critical to system performance and efficiency. Existing work on task offloading and resource allocation for multi-intelligent vehicles in GECC systems focuses on static methods that offload tasks once or a fixed number of times. This offloading scheme may lead to low resource utilization due to congestion on edge servers and is ill-suited to ITS with dynamically changing parameters such as bandwidth. To address these problems, we present a dynamic task offloading and resource allocation method that allows tasks to be offloaded an arbitrary number of times under time and resource constraints. Specifically, we consider the characteristics of tasks and propose a remaining model to obtain the states of vehicles and tasks in real time. We then present a task offloading and resource allocation method that accounts for both time and energy, based on a designed real-time multi-agent deep deterministic policy gradient (RT-MADDPG) model. Our approach can offload tasks an arbitrary number of times under resource and time constraints, and can dynamically adjust the offloading and resource allocation solutions as system states change, maximizing a system utility that considers both task processing time and energy. Extensive simulation results indicate that the proposed RT-MADDPG method effectively improves the utility of ITS compared with two benchmark methods.
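The utility described in the abstract trades task processing time against energy consumption. The sketch below is illustrative only: it assumes a simple weighted-sum cost over latency and energy to compare local execution with a single offload to an edge server, whereas the paper's RT-MADDPG agents learn this mapping from system state to offloading and allocation actions. All names and parameter values (alpha, kappa, f_local, f_edge, rate, p_tx) are hypothetical and not taken from the paper.

# Illustrative sketch only: a weighted time-energy cost for deciding whether
# to process a task locally or offload it to an edge server. The weights and
# models below are assumptions for illustration, not the paper's formulation.

def local_cost(cycles, f_local, kappa=1e-27, alpha=0.5):
    """Cost if the task runs on the vehicle's own CPU."""
    t = cycles / f_local                      # execution time [s]
    e = kappa * (f_local ** 2) * cycles       # dynamic CPU energy [J]
    return alpha * t + (1 - alpha) * e        # weighted cost (lower is better)

def offload_cost(data_bits, cycles, rate, p_tx, f_edge, alpha=0.5):
    """Cost if the task is uploaded to and executed on an edge server."""
    t_up = data_bits / rate                   # upload time [s]
    t_exec = cycles / f_edge                  # edge execution time [s]
    e_up = p_tx * t_up                        # transmission energy [J]
    return alpha * (t_up + t_exec) + (1 - alpha) * e_up

def choose_offloading(data_bits, cycles, f_local, f_edge, rate, p_tx):
    """Greedy single-step decision between local execution and one offload."""
    c_local = local_cost(cycles, f_local)
    c_edge = offload_cost(data_bits, cycles, rate, p_tx, f_edge)
    return ("edge", c_edge) if c_edge < c_local else ("local", c_local)

if __name__ == "__main__":
    # Hypothetical task: 2 MB of input data, 1 Gcycle of work.
    print(choose_offloading(data_bits=16e6, cycles=1e9,
                            f_local=1e9, f_edge=10e9,
                            rate=20e6, p_tx=0.5))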
ISSN: 0140-3664
DOI: 10.1016/j.comcom.2025.108051