Task Partition-Based Computation Offloading and Content Caching for Cloud–Edge Cooperation Networks
Saved in:
Published in: Symmetry (Basel), 2024-07, Vol. 16 (7), p. 906
Main authors: , , , , , , ,
Format: Article
Language: eng
Subjects:
Online access: Full text
Abstract: With the increasing complexity of applications, many delay-sensitive and compute-intensive services have posed significant challenges to mobile devices. How to efficiently allocate heterogeneous network resources to meet the computing and delay requirements of terminal services is therefore a pressing issue. In this paper, a new cooperative twin delayed deep deterministic policy gradient and deep Q-network (TD3-DQN) algorithm is introduced to minimize system latency by asynchronously optimizing computation offloading and caching placement. Specifically, a task-partitioning technique divides computing tasks into multiple subtasks, reducing response latency. A DQN-based algorithm optimizes the offloading path to edge servers by perceiving the network resource status, while a TD3 approach optimizes the content cached in the edge servers, ensuring that dynamically changing content-popularity requirements are met without excessive offloading decisions. Simulation results demonstrate that the proposed model achieves lower latency and faster convergence in asymmetrical cloud–edge collaborative networks than other benchmark algorithms.
ISSN: 2073-8994
DOI: 10.3390/sym16070906
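
The abstract above describes two agents acting on different timescales: a DQN-style agent that picks a discrete offloading target for each partitioned subtask, and a TD3-style actor that periodically refreshes cache placement on the edge servers. The following is a minimal sketch of that asynchronous decision loop, not the authors' implementation: the function names (`dqn_offload`, `td3_cache`), the constants (`N_EDGE`, `CACHE_PERIOD`, cache sizes), the linear Q-function, and the toy load dynamics are all assumptions for illustration, and the actual training machinery (replay buffers, critics, target networks) is omitted.

```python
# Illustrative sketch only: discrete offloading choices per subtask plus a
# slower, continuous-valued cache-placement policy, refreshed asynchronously.
import numpy as np

N_EDGE = 4          # hypothetical number of edge servers
N_CONTENT = 10      # hypothetical content catalogue size
CACHE_SLOTS = 3     # items each edge server can cache (assumed)
CACHE_PERIOD = 20   # cache placement refreshed every 20 offloading steps

rng = np.random.default_rng(0)

def dqn_offload(subtask_load, edge_load, q_weights, eps=0.1):
    """Epsilon-greedy edge-server choice for one subtask (DQN stand-in).

    Q-values here are a toy linear function of (subtask load, server load),
    standing in for a network that 'perceives network resource status'."""
    if rng.random() < eps:
        return int(rng.integers(N_EDGE))
    features = np.stack([np.full(N_EDGE, subtask_load), edge_load], axis=1)
    return int(np.argmax(features @ q_weights))

def td3_cache(popularity, actor_weights):
    """Deterministic actor (TD3 stand-in): score each content item per
    server and keep the CACHE_SLOTS highest-scoring items."""
    scores = np.tanh(actor_weights * popularity)          # (N_EDGE, N_CONTENT)
    return np.argsort(-scores, axis=1)[:, :CACHE_SLOTS]   # cached item ids

# Toy asynchronous rollout: subtasks are offloaded every step,
# cache placement is only re-decided every CACHE_PERIOD steps.
q_weights = rng.normal(size=2)                       # toy DQN parameters
actor_weights = rng.normal(size=(N_EDGE, N_CONTENT)) # toy TD3 actor parameters
edge_load = np.zeros(N_EDGE)
popularity = rng.dirichlet(np.ones(N_CONTENT))
cache = td3_cache(popularity, actor_weights)

for step in range(100):
    subtasks = rng.uniform(0.1, 1.0, size=3)         # one task, partitioned
    for load in subtasks:
        server = dqn_offload(load, edge_load, q_weights)
        edge_load[server] += load                    # queue subtask on server
    edge_load *= 0.8                                 # servers drain their queues
    if step % CACHE_PERIOD == 0:
        popularity = rng.dirichlet(np.ones(N_CONTENT))   # popularity drifts
        cache = td3_cache(popularity, actor_weights)

print("final edge load:", np.round(edge_load, 2))
print("cached items per server:\n", cache)
```

The split mirrors the asynchronous design sketched in the abstract: the offloading decision is discrete and made per subtask, so a value-based method (DQN) fits, while cache placement is a slower, continuous scoring problem suited to a deterministic actor-critic method such as TD3; how the two are actually coupled and trained is detailed in the article itself.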