Energy management of a microgrid considering nonlinear losses in batteries through Deep Reinforcement Learning
Published in: Applied Energy, 2024-08, Vol. 368, p. 123435, Article 123435
Main authors: , , ,
Format: Article
Language: English
Online access: Full text
Abstract: The massive deployment of microgrids could play a significant role in decarbonizing the electric sector amid the ongoing energy transition. The effective operation of these microgrids requires an Energy Management System (EMS), which establishes control set-points for all dispatchable components. EMSs can be formulated as classical optimization problems or as Partially Observable Markov Decision Processes (POMDPs). Deep Reinforcement Learning (DRL) algorithms, which have gained popularity in recent years, have been employed to solve the latter. Since DRL methods promise to deal effectively with nonlinear dynamics, this paper examines the performance of Twin-Delayed Deep Deterministic Policy Gradient (TD3), a state-of-the-art DRL method, for the EMS of a microgrid that includes nonlinear battery losses. Furthermore, the classical EMS-microgrid interaction is improved by refining the behavior of the underlying control system to obtain reliable results. The performance of this novel approach has been tested on two distinct microgrids, a residential one and a larger-scale grid, with satisfactory outcomes beyond the reduction of operational costs. The findings demonstrate the potential of DRL-based algorithms for enhancing energy management and driving more efficient power systems.
• Application of Deep Reinforcement Learning for microgrid energy management.
• A nonlinear battery-loss model is used, seeking a realistic approach.
• A realistic control system is considered in the model for robust results.
• Operational costs are reduced by 2% over previously reported work.
• Energy losses in the battery are reduced by 10% in the simulated scenarios.
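The abstract and highlights describe formulating the microgrid EMS as a POMDP and training a TD3 agent against a microgrid model with nonlinear battery losses. The sketch below is a minimal, hypothetical illustration of that setup, not the authors' implementation: the MicrogridEnv class, the quadratic loss coefficient, the PV and load profiles, the tariff, and the use of the stable-baselines3 TD3 implementation are all assumptions made for the example.

```python
# Hypothetical sketch: a toy single-battery microgrid environment with a
# quadratic (nonlinear) battery-loss model, trained with an off-the-shelf
# TD3 agent. All coefficients, profiles, and the reward shaping are
# illustrative assumptions, not taken from the paper.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import TD3


class MicrogridEnv(gym.Env):
    """Toy microgrid: battery set-point control under PV, load, and a tariff."""

    def __init__(self, horizon=24, capacity_kwh=100.0, p_max_kw=50.0):
        super().__init__()
        self.horizon, self.capacity, self.p_max = horizon, capacity_kwh, p_max_kw
        # Observation: [state of charge, hour of day, PV forecast, load forecast]
        self.observation_space = spaces.Box(0.0, 1.0, shape=(4,), dtype=np.float32)
        # Action: normalized battery power set-point in [-1, 1] (charge/discharge)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.soc = 0, 0.5
        return self._obs(), {}

    def _obs(self):
        pv = 0.5 * max(0.0, np.sin(np.pi * (self.t - 6) / 12))  # assumed PV profile
        load = 0.4 + 0.2 * np.sin(np.pi * self.t / 12)          # assumed load profile
        return np.array([self.soc, self.t / self.horizon, pv, load], dtype=np.float32)

    def step(self, action):
        p_batt = float(action[0]) * self.p_max                  # kW, >0 means charging
        # Nonlinear loss model: losses grow quadratically with battery power.
        loss_kw = 0.02 * (p_batt ** 2) / self.p_max
        self.soc = float(np.clip(self.soc + (p_batt - loss_kw) / self.capacity, 0.0, 1.0))
        obs = self._obs()
        pv_kw, load_kw = obs[2] * self.p_max, obs[3] * self.p_max
        grid_kw = load_kw + p_batt - pv_kw                      # power drawn from the grid
        price = 0.25 if 8 <= self.t < 20 else 0.10              # assumed tariff, EUR/kWh
        reward = -(price * max(grid_kw, 0.0) + 0.05 * loss_kw)  # energy cost + loss penalty
        self.t += 1
        return obs, reward, self.t >= self.horizon, False, {}


if __name__ == "__main__":
    env = MicrogridEnv()
    model = TD3("MlpPolicy", env, learning_rate=1e-3, verbose=0)
    model.learn(total_timesteps=10_000)                         # short demonstration run
```

In this sketch the nonlinearity enters through the quadratic loss term in `step`, so the agent is penalized both for grid imports and for high-power battery cycling, which is the kind of trade-off between operating cost and battery losses that the abstract attributes to the proposed EMS.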
ISSN: 0306-2619, 1872-9118
DOI: 10.1016/j.apenergy.2024.123435