Delay Constrained Hybrid Task Offloading of Internet of Vehicle: A Deep Reinforcement Learning Method
Saved in:
Published in: | IEEE Access, 2022, Vol. 10, p. 1-1 |
---|---|
Main authors: | , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
Summary: | The rapid development of the Internet of Things (IoT) has driven the progress of intelligent transportation systems (ITS), which provide basic elements, such as vehicles, traffic lights, cameras, roadside units (RSUs), and their interconnected 5G communications, to constitute the Internet of vehicles (IoV). In the IoV, an intelligent vehicle can share information not only with infrastructure such as RSUs via vehicle-to-infrastructure (V2I) communication but also with other vehicles on the road through vehicle-to-vehicle (V2V) communication. We thus expect that vehicles can collaborate with other well-resourced, idling vehicles, making full use of otherwise wasted resources. However, existing approaches cannot achieve this goal due to the increasingly strict delay constraints and the dynamic characteristics of IoV tasks. To improve resource utilization and achieve better resource management, in this paper we propose a hybrid task offloading scheme (HyTOS) based on deep reinforcement learning (DRL), which achieves vehicle-to-edge (V2E) and V2V offloading by jointly considering delay constraints and resource demand. To perform optimal offloading decision-making, we introduce a dynamic decision-making method, namely deep Q-networks (DQN). To verify the effectiveness of this approach, we choose three baseline offloading approaches (one game-theory-based and two single-scenario approaches) and perform a series of simulation experiments. The simulation results demonstrate that, compared to the baseline offloading approaches, our approach effectively reduces task delay and energy consumption, achieving high-efficiency resource management. |
---|---|
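The record does not reproduce the paper's DQN architecture, state space, or reward function. As a rough, hypothetical illustration of the kind of decision-making the abstract describes (choosing between local execution, V2E offloading, and V2V offloading to minimize delay), the sketch below uses a tabular Q-learning update with epsilon-greedy exploration; the state, actions, and toy reward are invented here and are not taken from HyTOS.

```python
import random

# Hypothetical sketch of DQN-style offloading decision-making: an agent
# learns which offloading target minimizes delay. The real HyTOS state,
# reward, and neural-network Q-function are defined in the paper; this
# tabular Q-learner is only an illustrative stand-in.

ACTIONS = ["LOCAL", "V2E", "V2V"]  # run on-board, offload to edge, offload to a peer vehicle

class OffloadAgent:
    def __init__(self, epsilon=0.1, alpha=0.5, gamma=0.9):
        self.q = {}             # (state, action) -> estimated value
        self.epsilon = epsilon  # exploration rate
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor

    def choose(self, state):
        """Epsilon-greedy action selection."""
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        """Standard Q-learning (the tabular ancestor of the DQN target)."""
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)

def toy_reward(peer_idle, action):
    # Invented delays: V2V is cheapest when a well-resourced peer is idle,
    # otherwise the edge server is the better target. Reward = negative delay.
    delays = {"LOCAL": 5.0, "V2E": 2.0, "V2V": 1.0 if peer_idle else 8.0}
    return -delays[action]

random.seed(0)
agent = OffloadAgent()
for _ in range(2000):
    s = random.choice([True, False])  # state: is an idle, well-resourced peer in range?
    a = agent.choose(s)
    agent.update(s, a, toy_reward(s, a), s)

# Greedy policy after training: V2V when a peer is idle, V2E otherwise.
policy = {s: max(ACTIONS, key=lambda a: agent.q.get((s, a), 0.0)) for s in (True, False)}
print(policy)
```

In the paper's setting, the dictionary `self.q` would be replaced by a neural network so the agent can generalize across high-dimensional task and channel states, which is precisely the step from Q-learning to DQN.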
ISSN: | 2169-3536 |
DOI: | 10.1109/ACCESS.2022.3206359 |