Efficient Mobility-Aware Task Offloading for Vehicular Edge Computing Networks

Bibliographic Details
Published in: IEEE Access, 2019, Vol. 7, pp. 26652-26664
Main Authors: Yang, Chao; Liu, Yi; Chen, Xin; Zhong, Weifeng; Xie, Shengli
Format: Article
Language: English
Online Access: Full text
Description
Summary: Vehicular networks face the challenge of supporting ubiquitous connections and high quality of service for numerous vehicles. To address these issues, mobile edge computing (MEC) is explored as a promising technology in vehicular networks by employing computing resources at the edge of vehicular wireless access networks. In this paper, we study efficient task offloading schemes in vehicular edge computing networks. The vehicles optimally perform the offloading time selection and the allocation of communication and computing resources, while the mobility of vehicles and the maximum latency of tasks are taken into account. To minimize the system costs, including the costs of the required communication and computing resources, we first analyze the offloading schemes in the independent MEC servers scenario, where the offloading tasks are processed independently by the MEC servers deployed at the access points (APs), and we propose a mobility-aware task offloading scheme. Then, in the cooperative MEC servers scenario, the MEC servers can further offload the collected overloading tasks to the adjacent servers at the next AP along the vehicles' moving direction, and a location-based offloading scheme is proposed. In both scenarios, we mainly consider the tradeoff between the task completion latency and the required communication and computation resources. Numerical results show that the proposed schemes reduce the system costs efficiently while satisfying the latency constraints.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2900530
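
The abstract above frames offloading as a cost-minimization problem subject to a task deadline and to the limited time a moving vehicle stays within an AP's coverage, with the option of handing the task to the MEC server at the next AP along the driving direction. The following is a minimal toy sketch of that tradeoff, not the paper's formulation: the cost model, the feasibility checks, and every name and numeric value (Task, OffloadOption, rate_bps, sojourn_s, etc.) are hypothetical assumptions introduced only for illustration.

```python
# Toy mobility-aware offloading choice (illustrative only, not the paper's scheme).
# A vehicle picks the cheapest offloading option whose latency fits both the task
# deadline and the time the vehicle remains reachable by the serving MEC server.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Task:
    data_bits: float       # input data to upload (bits) -- assumed value
    cpu_cycles: float      # CPU cycles needed to process the task -- assumed value
    max_latency_s: float   # hard completion deadline (seconds) -- assumed value


@dataclass
class OffloadOption:
    name: str
    rate_bps: float        # uplink rate with the allocated bandwidth
    cpu_hz: float          # CPU frequency allocated at the MEC server
    bw_cost: float         # cost of the allocated communication resources
    cpu_cost: float        # cost of the allocated computing resources
    sojourn_s: float       # time the vehicle stays reachable by this server
                           # (depends on speed, AP coverage, moving direction)


def latency(task: Task, opt: OffloadOption) -> float:
    """Upload time plus processing time (result return time neglected)."""
    return task.data_bits / opt.rate_bps + task.cpu_cycles / opt.cpu_hz


def feasible(task: Task, opt: OffloadOption) -> bool:
    t = latency(task, opt)
    return t <= task.max_latency_s and t <= opt.sojourn_s


def choose_offload(task: Task, options: List[OffloadOption]) -> Optional[OffloadOption]:
    """Return the cheapest feasible option, or None if no option meets the constraints."""
    candidates = [o for o in options if feasible(task, o)]
    return min(candidates, key=lambda o: o.bw_cost + o.cpu_cost, default=None)


if __name__ == "__main__":
    task = Task(data_bits=2e6, cpu_cycles=1e9, max_latency_s=0.5)
    options = [
        # MEC server at the current AP: cheaper, but the vehicle leaves coverage soon.
        OffloadOption("current AP", rate_bps=20e6, cpu_hz=4e9,
                      bw_cost=2.0, cpu_cost=3.0, sojourn_s=0.3),
        # MEC server at the next AP along the moving direction: longer sojourn time,
        # slightly higher communication cost for relaying the task.
        OffloadOption("next AP", rate_bps=15e6, cpu_hz=4e9,
                      bw_cost=3.0, cpu_cost=3.0, sojourn_s=2.0),
    ]
    best = choose_offload(task, options)
    if best is None:
        print("no feasible option")
    else:
        print(f"chosen: {best.name}, latency: {latency(task, best):.3f} s")
```

With these assumed numbers the current AP finishes in 0.35 s but the vehicle only remains in its coverage for 0.3 s, so the sketch selects the server at the next AP (0.383 s, within the 0.5 s deadline), which mirrors the mobility-aware intuition described in the abstract.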