Resource Allocation for Delay-Sensitive Vehicle-to-Multi-Edges (V2Es) Communications in Vehicular Networks: A Multi-Agent Deep Reinforcement Learning Approach
Saved in:
Published in: IEEE Transactions on Network Science and Engineering, 2021-04, Vol. 8 (2), pp. 1873-1886
Main authors: , , , , , ,
Format: Article
Language: eng
Subjects:
Online access: order full text
Abstract: The rapid development of the internet of vehicles (IoV) has recently led to the emergence of diverse intelligent vehicular applications such as automated driving, automatic navigation, and advanced driver assistance. However, current vehicular communication frameworks, such as vehicle-to-vehicle (V2V), vehicle-to-cloud (V2C), and vehicle-to-roadside infrastructure (V2I), still struggle to support these intelligent, delay-sensitive applications because of long communication latency or limited computational capability. In addition, traditional vehicular networks are prone to becoming unavailable owing to the high-speed mobility of vehicles. To address these issues, this paper proposes a vehicle-to-multi-edges (V2Es) communication framework for vehicular networks. By utilizing the resources of nearby edge nodes, emergency information and services of vehicles can be processed and completed in a timely manner, which improves the service quality of vehicles. Furthermore, we define a joint task offloading and edge caching problem that targets both service latency and vehicle energy consumption. Based on this, we propose a multi-agent reinforcement learning (RL) method that learns the dynamic communication status between vehicles and edge nodes and makes decisions on task offloading and edge caching. Finally, simulation results show that our proposal learns the scheduling policy more quickly and effectively and reduces service latency by more than 10% on average.
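The abstract describes the multi-agent RL scheduler only at a high level. As a purely illustrative sketch (not the paper's actual formulation: the state/action design, the latency and energy costs, and the bandit-style simplification below are all assumptions invented for demonstration), independent epsilon-greedy learners choosing which edge node to offload to, with a reward that jointly penalizes latency and energy, might look like:

```python
import random

# Illustrative sketch only: all constants and costs below are invented.
N_VEHICLES = 3          # agents
N_EDGES = 4             # candidate edge nodes (action = offload to edge e)
EPISODES = 2000
ALPHA, EPS = 0.1, 0.1   # learning rate, exploration rate

# Hypothetical per-edge latency and energy costs (lower is better).
LATENCY = [0.9, 0.4, 0.6, 0.2]
ENERGY = [0.2, 0.5, 0.3, 0.8]

# One independent Q-table per vehicle; a single-state bandit for brevity.
q = [[0.0] * N_EDGES for _ in range(N_VEHICLES)]

def reward(edge, load):
    # Joint objective: penalize latency (scaled up by congestion) and energy.
    return -(LATENCY[edge] * (1 + load) + ENERGY[edge])

random.seed(0)
for _ in range(EPISODES):
    # Each agent picks an edge epsilon-greedily from its own Q-table.
    acts = [random.randrange(N_EDGES) if random.random() < EPS
            else max(range(N_EDGES), key=lambda a: q[v][a])
            for v in range(N_VEHICLES)]
    # Congestion coupling: agents choosing the same edge raise its latency.
    for v, a in enumerate(acts):
        load = acts.count(a) - 1
        q[v][a] += ALPHA * (reward(a, load) - q[v][a])

for v in range(N_VEHICLES):
    best = max(range(N_EDGES), key=lambda a: q[v][a])
    print(f"vehicle {v} prefers edge {best}")
```

The congestion term is what makes the problem multi-agent: each vehicle's latency depends on the others' choices, so the independent learners implicitly spread load across edges. The paper's method additionally handles edge caching and time-varying channel state, which this single-state sketch omits.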
ISSN: 2327-4697, 2334-329X
DOI: 10.1109/TNSE.2021.3075530