D3PG: Dirichlet DDPG for Task Partitioning and Offloading With Constrained Hybrid Action Space in Mobile-Edge Computing

Bibliographic Details
Published in: IEEE Internet of Things Journal, vol. 9, no. 19, pp. 19260-19272, Oct. 2022
Authors: Ale, Laha; King, Scott A.; Zhang, Ning; Sattar, Abdul Rahman; Skandaraniyam, Janahan
Format: Article
Language: English
Description
Abstract: Mobile-edge computing (MEC) has been regarded as a promising paradigm to reduce service latency for data processing in the Internet of Things (IoT) by provisioning computing resources at the network edges. In this work, we jointly optimize task partitioning and computational power allocation for computation offloading in a dynamic environment with multiple IoT devices and multiple edge servers. We formulate the problem as a Markov decision process with a constrained hybrid action space, which cannot be handled well by existing deep reinforcement learning (DRL) algorithms. Therefore, we develop a novel DRL algorithm called Dirichlet deep deterministic policy gradient (D3PG), built on deep deterministic policy gradient (DDPG), to solve the problem. The developed model learns to solve a multiobjective optimization, maximizing the number of tasks processed before their deadlines while minimizing energy cost and service latency. More importantly, D3PG can effectively deal with a constrained distribution-continuous hybrid action space, in which the distribution variables govern task partitioning and offloading while the continuous variables control the computational frequency. Moreover, D3PG can address many similar problems in MEC and in general reinforcement learning. Extensive simulation results show that the proposed D3PG outperforms state-of-the-art methods.
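To make the hybrid action concrete, the sketch below shows one way an actor network can emit such an action: a Dirichlet head whose sample lies on the probability simplex (the task-partitioning fractions across local execution and edge servers are non-negative and sum to 1), plus a bounded continuous head for frequency control. This is a minimal PyTorch illustration under assumed layer sizes and parameterizations (softplus for the concentration parameters, sigmoid for the frequency), not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Dirichlet

class HybridActor(nn.Module):
    """Illustrative actor for a constrained distribution-continuous
    hybrid action space: a Dirichlet sample for task partitioning
    (non-negative fractions summing to 1) plus a bounded continuous
    action for computational-frequency control."""

    def __init__(self, state_dim: int, n_targets: int, hidden: int = 256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Concentration parameters alpha > 0 of Dirichlet(alpha).
        self.alpha_head = nn.Linear(hidden, n_targets)
        # One continuous frequency knob, squashed to (0, 1).
        self.freq_head = nn.Linear(hidden, 1)

    def forward(self, state: torch.Tensor):
        h = self.trunk(state)
        # softplus keeps alpha strictly positive; the small offset
        # avoids near-zero concentrations that destabilize sampling.
        alpha = F.softplus(self.alpha_head(h)) + 1e-4
        # rsample() is reparameterized, so critic gradients can flow
        # back through the sampled partition (DDPG-style training).
        partition = Dirichlet(alpha).rsample()
        freq = torch.sigmoid(self.freq_head(h))  # rescale to [f_min, f_max] outside
        return partition, freq

# Example: partition a task over local execution + 3 edge servers.
actor = HybridActor(state_dim=20, n_targets=4)
partition, freq = actor(torch.randn(8, 20))
assert torch.allclose(partition.sum(-1), torch.ones(8), atol=1e-5)
```

Because a Dirichlet sample satisfies the simplex constraint by construction, no projection or penalty term is needed to keep the partition valid; the continuous frequency action only requires an affine rescaling to its feasible range.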
ISSN: 2327-4662
DOI: 10.1109/JIOT.2022.3166110