Reinforcement Learning Optimized Look-Ahead Energy Management of a Parallel Hybrid Electric Vehicle

Bibliographic Details
Published in: IEEE/ASME Transactions on Mechatronics, 2017-08, Vol. 22 (4), p. 1497-1507
Authors: Liu, Teng; Hu, Xiaosong; Li, Shengbo Eben; Cao, Dongpu
Format: Article
Language: English
Description
Abstract: This paper presents a predictive energy management strategy for a parallel hybrid electric vehicle (HEV) based on velocity prediction and reinforcement learning (RL). The design procedure starts with modeling the parallel HEV as a systematic control-oriented model and defining a cost function. Fuzzy encoding and nearest-neighbor approaches are proposed to achieve velocity prediction, and a finite-state Markov chain is exploited to learn the transition probabilities of power demand. To determine the optimal control behaviors and the power distribution between the two energy sources, a novel RL-based energy management strategy is introduced. For comparison, the two velocity prediction processes are examined by RL using the same realistic driving cycle. The look-ahead energy management strategy is contrasted with short-sighted and dynamic programming-based counterparts, and further validated by a hardware-in-the-loop test. The results demonstrate that the RL-optimized control significantly reduces fuel consumption and computational time.
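Two ingredients named in the abstract lend themselves to a brief illustration: estimating a finite-state Markov chain of power-demand transitions from recorded driving data, and a tabular RL (Q-learning) update that decides how to split power demand between the engine and the battery. The sketch below is a minimal illustration under assumed discretizations; it is not the authors' implementation, and the function names (`estimate_transition_matrix`, `fuel_cost`, `q_learning`), the quadratic fuel model, and all parameter values are hypothetical placeholders.

```python
# Minimal sketch (not the paper's code): learn a finite-state Markov chain of
# power demand and run tabular Q-learning for an engine/battery power split.
# Discretizations, costs, and parameters are illustrative assumptions.
import numpy as np

N_POWER = 10    # discretized power-demand levels (assumption)
N_SOC = 20      # discretized battery state-of-charge levels (assumption)
N_ACTION = 5    # discretized engine share of demand: 0%, 25%, ..., 100%

def estimate_transition_matrix(power_trace, n_states=N_POWER):
    """Count transitions in a recorded power-demand trace and normalize rows."""
    counts = np.ones((n_states, n_states))  # Laplace smoothing
    for p, p_next in zip(power_trace[:-1], power_trace[1:]):
        counts[p, p_next] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def fuel_cost(engine_power):
    """Hypothetical convex fuel-rate model standing in for a real engine map."""
    return 0.1 * engine_power + 0.02 * engine_power ** 2

def step(power_level, soc, action, transition):
    """Split demand, update SOC, and sample the next demand from the chain.
    Simplification: the battery only discharges here (no charging branch)."""
    demand = power_level + 1.0                       # kW per level (assumption)
    engine_power = demand * action / (N_ACTION - 1)
    battery_power = demand - engine_power
    soc_next = int(np.clip(soc - round(battery_power), 0, N_SOC - 1))
    cost = fuel_cost(engine_power) + (5.0 if soc_next == 0 else 0.0)
    power_next = np.random.choice(N_POWER, p=transition[power_level])
    return power_next, soc_next, cost

def q_learning(transition, episodes=200, horizon=300,
               alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning over (power demand, SOC) states, minimizing cost."""
    Q = np.zeros((N_POWER, N_SOC, N_ACTION))
    for _ in range(episodes):
        p, soc = np.random.randint(N_POWER), N_SOC // 2
        for _ in range(horizon):
            a = np.random.randint(N_ACTION) if np.random.rand() < eps \
                else int(np.argmin(Q[p, soc]))
            p2, soc2, cost = step(p, soc, a, transition)
            target = cost + gamma * Q[p2, soc2].min()
            Q[p, soc, a] += alpha * (target - Q[p, soc, a])
            p, soc = p2, soc2
    return Q

if __name__ == "__main__":
    np.random.seed(0)
    trace = np.random.randint(0, N_POWER, size=5000)  # placeholder driving cycle
    T = estimate_transition_matrix(trace)
    Q = q_learning(T)
    print("Greedy engine-share action at mid SOC:", Q[:, N_SOC // 2].argmin(axis=1))
```

In the paper the predicted velocity (via fuzzy encoding or nearest neighbor) would drive the power-demand state, and the cost function, engine map, and SOC dynamics would come from the control-oriented HEV model; the sketch replaces all of these with toy stand-ins to show only the Markov-chain estimation and the Q-learning loop.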
ISSN: 1083-4435, 1941-014X
DOI: 10.1109/TMECH.2017.2707338