Multi-agent deep reinforcement learning for collaborative task offloading in mobile edge computing networks
Published in: | Digital signal processing, 2023-08, Vol. 140, p. 104127, Article 104127 |
Main authors: | , , |
Format: | Article |
Language: | English |
Online access: | Full text |
Abstract: | As is widely accepted, mobile edge computing (MEC) is a promising technology for enabling wireless devices (WDs) to process computation-intensive tasks. Because WDs influence one another, collaborative task offloading is needed in multi-agent environments. In this paper, a multi-agent MEC network with delay-sensitive, non-partitionable tasks is considered, taking the load on the MEC servers into account. The joint optimization of offloading decisions and resource allocation is formulated to minimize the average delay. To realize collaborative decision-making, a multi-agent deep reinforcement learning algorithm is proposed within the framework of centralized training and decentralized execution: centralized deep neural networks (DNNs) learn from past experience, and the WDs learn policies from these networks' evaluations of their actions. Based on the learned policies, WDs can make offloading decisions using only local information. Simulation results show that the proposed algorithm achieves near-optimal performance and remains highly stable in varying multi-agent environments. |
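The abstract's core idea, centralized training with decentralized execution (CTDE) for offloading decisions, can be illustrated with a toy sketch. Everything below is an assumption for illustration, not the paper's actual model: the number of WDs, the delay function coupling offloaders through server load, and the simple advantage-based update standing in for the paper's DNN-based critic.

```python
import numpy as np

rng = np.random.default_rng(0)
N_WDS = 3    # wireless devices (agents); hypothetical count
ACTIONS = 2  # 0 = compute locally, 1 = offload to the MEC server

def step_delay(actions):
    """Average task delay for a joint action (hypothetical model):
    local computing is slow; offloading is fast but the shared MEC
    server slows down as more WDs offload (server-load coupling)."""
    n_off = actions.sum()
    delays = np.where(actions == 1, 1.0 + 0.8 * n_off, 3.0)
    return delays.mean()

# Decentralized actors: one policy logit table per WD.
logits = np.zeros((N_WDS, ACTIONS))

def act(explore=True):
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    if explore:
        return np.array([rng.choice(ACTIONS, p=p) for p in probs])
    return probs.argmax(axis=1)  # execution uses only local policy

# Centralized "critic" (here just a running delay baseline): it sees the
# joint action and global delay, but is used only during training.
baseline, lr = 2.0, 0.1
for _ in range(2000):
    a = act()
    delay = step_delay(a)
    advantage = baseline - delay  # beating the baseline is good
    for i in range(N_WDS):        # policy-gradient-style actor update
        logits[i, a[i]] += lr * advantage
    baseline += 0.05 * (delay - baseline)

greedy = act(explore=False)       # decentralized decisions after training
```

After training, each WD picks its offloading action from its own logits alone, mirroring the paper's point that execution needs only local information while the global delay signal is used centrally during training.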
ISSN: | 1051-2004, 1095-4333 |
DOI: | 10.1016/j.dsp.2023.104127 |