Learning-Based Real-Time Aggregate Flexibility Provision and Scheduling of Electric Vehicles



Bibliographic Details
Published in: IEEE Transactions on Smart Grid, 2024-11, Vol. 15 (6), pp. 5840-5852
Authors: Zhang, Mingyang; Yang, Hongrong; Xu, Yinliang; Sun, Hongbin
Format: Article
Language: English
Description
Abstract: It is recognized that large-scale electric vehicles (EVs) can be aggregated to behave as controllable storage and provide flexibility for power systems. To provide high-quality services to both the system and EV users, it is critical to accurately estimate the aggregate flexibility of EVs, which is highly challenging due to uncertainties from regulation signals and EV behaviors. This paper therefore proposes a learning-based approach that is model-free and online. First, the optimal aggregate flexibility provision problem is formulated as a Markov decision process (MDP). Moreover, a heuristic causal real-time scheduling policy is developed to allocate the disaggregated power to each EV. Then, a deep reinforcement learning (DRL) algorithm, which incorporates the idea of Munchausen reinforcement learning into the Soft Actor-Critic method, is proposed to solve the MDP. In addition, an unsupervised learning stage is integrated into the algorithm for state-dimension reduction. Numerical simulation results based on a real-world dataset demonstrate that the proposed approach effectively handles multiple uncertainties, balances economic efficiency and user satisfaction, and outperforms existing model-driven aggregation methods and DRL algorithms.
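The abstract's central algorithmic idea, folding Munchausen reinforcement learning into the Soft Actor-Critic critic update, can be illustrated with a minimal sketch of the augmented target value. The function name, hyperparameter values, and clipping bound below are illustrative assumptions from the general Munchausen-RL literature, not the paper's actual implementation.

```python
import numpy as np

def munchausen_sac_target(r, logp_a, q_next, logp_next,
                          gamma=0.99, tau=0.05, alpha=0.9, l0=-1.0):
    """Sketch of a Munchausen-augmented soft Q-learning target.

    r         : immediate reward
    logp_a    : log pi(a_t | s_t) under the current policy
    q_next    : soft Q estimate at the next state-action pair
    logp_next : log pi(a_{t+1} | s_{t+1})

    Munchausen RL adds a scaled log-policy bonus to the reward;
    the bonus tau * log pi is clipped below at l0 so the target
    stays bounded when the policy probability is near zero.
    """
    munchausen_bonus = alpha * np.clip(tau * logp_a, l0, 0.0)
    # Standard SAC soft value of the next state: Q minus the entropy term.
    soft_next_value = q_next - tau * logp_next
    return r + munchausen_bonus + gamma * soft_next_value
```

In this sketch the only change relative to plain SAC is the `munchausen_bonus` term; setting `alpha=0` recovers the ordinary soft target.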
ISSN: 1949-3053, 1949-3061
DOI: 10.1109/TSG.2024.3400968