Dueling Double Deep Q Network Strategy in MEC for Smart Internet of Vehicles Edge Computing Networks
Saved in:
Published in: | Journal of grid computing 2024-03, Vol.22 (1), p.37, Article 37 |
---|---|
Main authors: | , |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Full text |
Abstract: | Advances in communication systems require nearby devices to act as network resources when they are idle. Mobile edge computing (MEC) is one such technology, providing extensive communication and computation services within the network. In this research, we explore a multiuser smart Internet of Vehicles (IoV) network with MEC assistance, where a nearby edge server can help complete compute-intensive jobs from vehicular users. Many existing works on MEC networks concentrate primarily on minimising system latency to ensure quality of service (QoS) for users by designing offloading strategies, but they fail to account for the prices charged by the server and, consequently, the budgetary constraints of the users. To address this problem, we present a Dueling Double Deep Q Network (D3QN) combined with an Optimal Stopping Theory (OST) strategy that solves the multi-task joint edge-offloading problem in MEC-based IoV networks. The multi-task offloading model exploits the OST properties to increase the likelihood of offloading to the best available servers. Finally, simulations show that the proposed methods outperform traditional ones. The findings demonstrate that the proposed offloading techniques can be deployed on mobile nodes and significantly reduce the expected time required to process the workloads. |
---|---|
ISSN: | 1570-7873 1572-9184 |
DOI: | 10.1007/s10723-024-09752-8 |
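
The abstract above names two building blocks: a Dueling Double Deep Q Network (D3QN) that learns the offloading policy, and an Optimal Stopping Theory (OST) rule for choosing when to commit to a server. This record does not give the paper's architecture or hyperparameters, so the following is only a minimal PyTorch sketch of the generic D3QN ingredients (dueling value/advantage streams plus the double-DQN target); the state dimension, layer sizes, and discount factor are illustrative assumptions, and the OST stopping rule is not shown.

```python
# Minimal D3QN sketch (dueling architecture + double-DQN target).
# All sizes and hyperparameters below are illustrative assumptions,
# not values taken from the paper.
import torch
import torch.nn as nn


class DuelingQNet(nn.Module):
    """Q-network with separate state-value and advantage streams."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)               # V(s)
        self.advantage = nn.Linear(hidden, n_actions)   # A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v = self.value(h)                               # (batch, 1)
        a = self.advantage(h)                           # (batch, n_actions)
        # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)


def double_dqn_target(online: DuelingQNet, target: DuelingQNet,
                      reward: torch.Tensor, next_state: torch.Tensor,
                      done: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Double-DQN target: the online net selects the next action and the
    target net evaluates it, reducing the overestimation bias of plain DQN."""
    with torch.no_grad():
        next_action = online(next_state).argmax(dim=1, keepdim=True)
        next_q = target(next_state).gather(1, next_action).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q
```

In an offloading setting, the state could encode task size, channel quality, and current server prices, and each discrete action could mean "compute locally" or "offload to server k"; the learned Q-values would then feed whatever stopping criterion the OST component applies. That mapping is an assumption for illustration here, not the paper's exact design.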