A Multi-Agent Reinforcement Learning-Based Task-Offloading Strategy in a Blockchain-Enabled Edge Computing Network

Bibliographic Details
Published in: Mathematics (Basel), 2024-07, Vol. 12 (14), p. 2264
Main authors: Liu, Chenlei; Sun, Zhixin
Format: Article
Language: English
Online Access: Full text
Description
Abstract: In recent years, many mobile edge computing network solutions have enhanced data privacy and security and built a trusted network mechanism by introducing blockchain technology. However, this also complicates the task-offloading problem of blockchain-enabled mobile edge computing, which traditional evolutionary learning and single-agent reinforcement learning algorithms struggle to solve effectively. In this paper, we propose a blockchain-enabled mobile edge computing task-offloading strategy based on multi-agent reinforcement learning. First, we propose a blockchain-enabled mobile edge computing task-offloading model that jointly considers optimization objectives such as task execution energy consumption, processing delay, user privacy metrics, and blockchain incentive rewards. Then, we propose a deep reinforcement learning algorithm, built on the actor–critic architecture, in which multiple agents share a global memory pool, so that each agent can draw on the experience of the other agents during training to enhance collaboration among agents and overall performance. In addition, we add attenuating Gaussian noise to the action-selection process of the actor network to avoid falling into local optima. Finally, experiments show that the comprehensive cost performance of this scheme is more than 10% better than that of other multi-agent reinforcement learning algorithms; Gaussian-noise-based action selection and the global memory pool improve performance by 38.36% and 43.59%, respectively.
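The abstract highlights two algorithmic mechanisms: a global memory pool shared by all actor–critic agents and decaying (attenuating) Gaussian noise added during action selection. The sketch below is a minimal, hypothetical illustration of those two ideas only, not the authors' implementation; the class names, placeholder policy, and hyperparameters (noise scale, decay rate, buffer capacity) are assumptions.

```python
# Minimal sketch (not the paper's code): a replay buffer shared by all agents
# plus an actor wrapper that adds Gaussian exploration noise whose scale decays.
import random
from collections import deque

import numpy as np


class GlobalMemoryPool:
    """Replay buffer shared by every agent so each can sample the others' experience."""

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, agent_id, state, action, reward, next_state, done):
        self.buffer.append((agent_id, state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Transitions from all agents are mixed, so sampling exposes each agent
        # to the experience collected by the others.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))


class NoisyActor:
    """Wraps a deterministic policy and adds Gaussian noise that attenuates over time."""

    def __init__(self, policy_fn, action_low, action_high,
                 noise_std=0.3, noise_decay=0.999, noise_min=0.01):
        self.policy_fn = policy_fn          # maps state -> action (e.g., an actor network)
        self.low, self.high = action_low, action_high
        self.std, self.decay, self.min_std = noise_std, noise_decay, noise_min

    def act(self, state):
        action = np.asarray(self.policy_fn(state), dtype=np.float64)
        action = action + np.random.normal(0.0, self.std, size=action.shape)
        self.std = max(self.min_std, self.std * self.decay)   # attenuate the noise
        return np.clip(action, self.low, self.high)


if __name__ == "__main__":
    pool = GlobalMemoryPool()
    # Hypothetical 2-agent setup with a placeholder policy (constant offloading ratios).
    actors = [NoisyActor(lambda s: np.full(3, 0.5), 0.0, 1.0) for _ in range(2)]
    state = np.zeros(4)
    for step in range(5):
        for agent_id, actor in enumerate(actors):
            action = actor.act(state)
            pool.push(agent_id, state, action, reward=0.0, next_state=state, done=False)
    batch = pool.sample(4)
    print(len(pool.buffer), "transitions shared across agents;", len(batch), "sampled")
```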
ISSN: 2227-7390
DOI: 10.3390/math12142264