A matrix game‐based coordinated optimisation method of distribution networks with multiple flexible resources

Bibliographic Details
Published in: Energy systems integration, 2024-09, Vol. 6 (3), pp. 283-296
Main Authors: An, Yingdong; Liu, Yixin; Guo, Li; Li, Xinchen; Li, Xiangjun; Jia, Xuecui; Wang, Tengxin; Zhang, Min
Format: Article
Language: English
Online Access: Full text
Description
Abstract: The 'Electric energy substitution' project in rural areas of northern China aims to replace coal-fired heating with electric heating. However, this could lead to congestion on the local distribution networks (DNs) due to the increase in heating loads. Instead of expanding the lines, flexible resources such as microgrids (MGs) and heat pumps (HPs) could provide auxiliary services in a more cost-effective manner. A bi-level optimisation model that coordinates DNs, MGs, and HP clusters is proposed to address this issue. In the lower level, MGs and HP clusters provide auxiliary services through competitive bidding to maximise their own income over multiple periods, subject to a series of technical operational constraints and comfort constraints. In the upper level, the DN clears the market based on the bidding information to minimise its operational costs while guaranteeing network constraints. The bi-level optimisation model is formulated as a multi-agent matrix game, and the Win or Learn Fast Policy Hill-Climbing (WoLF-PHC) algorithm is used to reach market equilibrium quickly in a decentralised manner. Simulation results demonstrate that the proposed method improves the revenue of MGs by up to 2.6 times compared to the single-period matrix game method, and reduces the convergence time by up to 81.3% compared to the multi-agent Q-learning method.
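
The abstract names the Win or Learn Fast Policy Hill-Climbing (WoLF-PHC) algorithm as the decentralised learner that drives the matrix game toward market equilibrium. Below is a minimal, illustrative Python sketch of a stateless WoLF-PHC agent for a repeated matrix game; the class name, learning rates, and the matching-pennies usage are assumptions for illustration only, not the paper's bidding or market-clearing model.

```python
import numpy as np

class WoLFPHCAgent:
    """Illustrative stateless WoLF-PHC learner for a repeated matrix game.

    Keeps a Q-value per action, a mixed strategy pi, and a running average
    strategy pi_avg. The policy hill-climbs toward the greedy action with a
    small step when "winning" (the current policy beats the average policy
    under Q) and a larger step when "losing".
    """

    def __init__(self, n_actions, alpha=0.1, delta_win=0.01, delta_lose=0.04, seed=0):
        self.n = n_actions
        self.Q = np.zeros(n_actions)                        # per-action value estimates
        self.pi = np.full(n_actions, 1.0 / n_actions)       # current mixed strategy
        self.pi_avg = np.full(n_actions, 1.0 / n_actions)   # time-averaged strategy
        self.updates = 0
        self.alpha = alpha                                  # Q-learning rate (hypothetical value)
        self.dw, self.dl = delta_win, delta_lose            # win/lose policy step sizes
        self.rng = np.random.default_rng(seed)

    def act(self):
        return int(self.rng.choice(self.n, p=self.pi))

    def update(self, action, reward):
        # Q update; a one-shot matrix game has no successor state, so gamma = 0.
        self.Q[action] += self.alpha * (reward - self.Q[action])

        # Incrementally track the average policy.
        self.updates += 1
        self.pi_avg += (self.pi - self.pi_avg) / self.updates

        # "Win or learn fast": small step when winning, large when losing.
        winning = self.pi @ self.Q > self.pi_avg @ self.Q
        delta = self.dw if winning else self.dl

        # Hill-climb: shift probability mass toward the greedy action.
        greedy = int(np.argmax(self.Q))
        for a in range(self.n):
            if a == greedy:
                continue
            step = min(self.pi[a], delta / (self.n - 1))
            self.pi[a] -= step
            self.pi[greedy] += step
        self.pi /= self.pi.sum()  # guard against floating-point drift


# Toy usage (hypothetical payoffs): two agents in matching pennies,
# where both strategies are expected to drift toward the (0.5, 0.5) mix.
a, b = WoLFPHCAgent(2, seed=1), WoLFPHCAgent(2, seed=2)
for _ in range(20000):
    ia, ib = a.act(), b.act()
    r = 1.0 if ia == ib else -1.0   # the matcher wins on a match
    a.update(ia, r)
    b.update(ib, -r)
print(np.round(a.pi, 2), np.round(b.pi, 2))
```

The larger losing step is the point of the WoLF heuristic: a losing agent adapts faster than its winning opponent, which is what tends to stabilise learning around the mixed equilibrium in games like this one.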
ISSN: 2516-8401
DOI: 10.1049/esi2.12127