Deep Reinforcement Learning for Intelligent Internet of Vehicles: An Energy-Efficient Computational Offloading Scheme
Published in: IEEE Transactions on Cognitive Communications and Networking, 2019-12, Vol. 5 (4), pp. 1060-1072
Main authors: , , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: The emerging vehicular services call for updated communication and computing platforms. Fog computing, whose infrastructure is deployed in close proximity to terminals, extends the facilities of cloud computing. However, due to the limitations of vehicular fog nodes, it is challenging to satisfy the quality of experience of users, calling for intelligent networks with updated computing abilities. This paper constructs a three-layer offloading framework in the intelligent Internet of Vehicles (IoV) to minimize the overall energy consumption while satisfying the delay constraints of users. Due to its high computational complexity, the formulated problem is decomposed into two parts: 1) flow redirection and 2) offloading decision. A deep reinforcement learning-based scheme is then put forward to solve the optimization problem. Performance evaluations based on real-world traces of taxis in Shanghai, China, demonstrate the effectiveness of our methods: average energy consumption can be decreased by around 60% compared with the baseline algorithm.
ISSN: 2332-7731
DOI: 10.1109/TCCN.2019.2930521
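The offloading-decision subproblem the abstract describes, i.e. choosing where to execute a task so that energy is minimized subject to a delay constraint, can be illustrated with a minimal reinforcement-learning sketch. This is not the paper's scheme (which uses deep reinforcement learning over a three-layer IoV framework); it is a toy single-state Q-learning example, and the action set, energy/delay values, and delay limit below are invented for illustration only.

```python
import random

# Toy offloading targets (illustrative values, NOT from the paper):
# each action maps to a hypothetical (energy cost, delay) per task.
ACTIONS = {"local": (5.0, 1.0), "fog": (2.0, 2.0), "cloud": (1.0, 5.0)}
DELAY_LIMIT = 4.0  # hypothetical per-task delay constraint


def reward(action):
    """Negative energy, with a large penalty if the delay constraint is violated."""
    energy, delay = ACTIONS[action]
    return -energy - (100.0 if delay > DELAY_LIMIT else 0.0)


def train(episodes=2000, eps=0.1, lr=0.5, seed=0):
    """Epsilon-greedy Q-learning over a single-state (bandit-like) Q-table."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        # Explore with probability eps, otherwise exploit the best-known action.
        a = rng.choice(list(ACTIONS)) if rng.random() < eps else max(q, key=q.get)
        # One-step Q update toward the observed reward.
        q[a] += lr * (reward(a) - q[a])
    return q


q = train()
best = max(q, key=q.get)
print(best)  # "fog": lowest energy among the delay-feasible actions
```

With these made-up numbers the learner settles on the fog node: the cloud is cheapest in energy but violates the delay constraint, while local execution is delay-feasible but costly, mirroring the energy/delay trade-off the paper's framework targets.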