Multi-agent deep reinforcement learning optimization framework for building energy system with renewable energy
Published in: Applied Energy, 2022-04, Vol. 312, p. 118724, Article 118724
Format: Article
Language: English
Online access: Full text
Abstract highlights:
•A multi-agent dueling double deep Q-network optimization framework was established.
•Prioritized experience replay and a feasible-action screening mechanism were applied.
•A value-decomposition network was used to solve multi-agent reliability problems.
•Simulation results demonstrate the economic benefit and feasibility of the method.
Against the background of high global building energy consumption, meeting the ever-growing energy demand of building energy systems (BES) with renewable energy is one of the effective ways to promote the clean transformation of the global energy structure and achieve "carbon neutrality". However, with the introduction of renewable energy, BES control becomes more complicated. The fluctuation of renewable energy and the randomness of load cause a mismatch between the supply and demand sides, which limits further growth of renewable energy consumption. It is therefore challenging to develop an efficient framework for the cooperative control of the various controlled objects on the supply and demand sides. To address this challenge, a multi-agent deep reinforcement learning framework was proposed to optimize the energy management of the building. In this paper, a dueling double deep Q-network was used to optimize each single agent, and a value-decomposition network was put forward to solve the cooperative optimization of multiple agents. In addition, considering the control characteristics of BES, prioritized experience replay and a feasible-action screening mechanism were introduced to accelerate convergence and maintain the stability of the algorithm when applied to BES. Simulation results show that the multi-agent cooperation algorithm can control different types of devices at the same time and achieve multi-objective cooperative optimization of BES. Moreover, compared with a conventional rule-based control approach, the proposed approach reduced the uncomfortable duration by 84%, the unconsumed amount of renewable energy by 43%, and the energy cost by 8%.
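To make the named ingredients concrete, the following is a minimal, illustrative sketch of how a dueling Q-network per agent, double-DQN targets, value-decomposition (summing per-agent Q-values into a joint Q), and feasible-action screening (masking infeasible actions before the argmax) can fit together. It is written in PyTorch, which the record does not specify; all class and function names, network sizes, and the assumed batch layout are hypothetical choices for illustration, not the authors' implementation, and prioritized experience replay is only hinted at via the per-sample TD errors.

```python
# Illustrative sketch only: dueling double DQN per agent, VDN-style joint
# Q-value, and feasible-action masking. Names, sizes, and batch layout are
# assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class DuelingQNet(nn.Module):
    """Dueling Q-network: separate state-value and advantage streams."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # A(s, a)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        h = self.trunk(obs)
        v, a = self.value(h), self.advantage(h)
        # Combine streams; subtracting the mean advantage keeps Q identifiable.
        return v + a - a.mean(dim=-1, keepdim=True)


def select_feasible_action(q_values: torch.Tensor, feasible_mask: torch.Tensor) -> torch.Tensor:
    """Feasible-action screening: infeasible actions get -inf before argmax."""
    masked_q = q_values.masked_fill(~feasible_mask, float("-inf"))
    return masked_q.argmax(dim=-1)


def vdn_double_dqn_loss(online_nets, target_nets, batch, gamma: float = 0.99):
    """VDN loss with double-DQN targets.

    The joint Q-value is the sum of per-agent Q-values; the next action is
    chosen by each online network (double DQN), restricted to feasible
    actions, and evaluated by the corresponding target network.
    """
    # Assumed batch layout: per-agent lists of tensors plus a shared reward.
    obs, actions, reward, next_obs, next_masks, done = batch

    q_joint = sum(
        net(o).gather(-1, a.unsqueeze(-1)).squeeze(-1)
        for net, o, a in zip(online_nets, obs, actions)
    )
    with torch.no_grad():
        q_next_joint = 0.0
        for net, tgt, o2, m in zip(online_nets, target_nets, next_obs, next_masks):
            a_star = select_feasible_action(net(o2), m)        # online net picks
            q_next_joint = q_next_joint + tgt(o2).gather(
                -1, a_star.unsqueeze(-1)
            ).squeeze(-1)                                      # target net evaluates
        target = reward + gamma * (1.0 - done) * q_next_joint

    # The per-sample TD errors (q_joint - target) could also be used as
    # priorities for a prioritized experience replay buffer.
    return nn.functional.smooth_l1_loss(q_joint, target)
```

Because the agents are trained through a single joint loss on the summed Q-values, each device-level agent can still act greedily on its own Q-network at execution time, which is the property that value decomposition is typically used to obtain.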
ISSN: 0306-2619, 1872-9118
DOI: 10.1016/j.apenergy.2022.118724