Deep Reinforcement Learning-Enabled Distributed Uniform Control for a DC Solid State Transformer in DC Microgrid


Bibliographic Details
Published in: IEEE Transactions on Industrial Electronics (1982), 2024-06, Vol. 71 (6), pp. 1-12
Authors: Zeng, Yu; Pou, Josep; Sun, Changjiang; Li, Xinze; Liang, Gaowen; Xia, Yang; Mukherjee, Suvajit; Gupta, Amit Kumar
Format: Article
Language: English
Abstract: This article proposes a distributed uniform control approach for a dc solid state transformer (DCSST) that feeds constant power loads. The proposed approach utilizes a multiagent deep reinforcement learning (MADRL) technique to coordinate multiple control objectives. During the offline training stage, each DRL agent supervises a submodule (SM) of the DCSST and outputs real-time actions based on the received states. Optimal phase-shift ratio combinations are learned using triple phase-shift modulation, and soft actor-critic (SAC) agents optimize the neural network parameters to enhance controller performance. The well-trained agents act as fast surrogate models that provide online control decisions for the DCSST, adapting to varying environmental conditions using only local SM information. The proposed distributed configuration improves redundancy and modularity, facilitating hot-swap experiments. Experimental results demonstrate the excellent performance of the proposed multiagent SAC algorithm.
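The abstract's key architectural idea is that each submodule (SM) carries its own trained agent, which maps only local SM measurements to triple phase-shift (TPS) ratios. The sketch below illustrates that distributed decision structure in plain Python; the `SubmoduleAgent` class, its linear policy stub, and the state/gain values are hypothetical placeholders (a trained SAC actor network would replace the stub), not the authors' implementation.

```python
class SubmoduleAgent:
    """Hypothetical per-SM agent: maps local measurements to triple
    phase-shift (TPS) ratios. In the paper, a trained SAC actor network
    plays this role; here a linear stub stands in for illustration."""

    def __init__(self, sm_id, gain=0.05):
        self.sm_id = sm_id
        self.gain = gain  # illustrative gain, not a value from the paper

    def act(self, local_state):
        # local_state = (voltage_error, output_current) for this SM only,
        # mirroring the claim that agents need only local SM information.
        v_err, i_out = local_state
        base = 0.5 + self.gain * v_err  # stub in place of the actor network
        d1 = min(max(base, 0.0), 1.0)
        d2 = min(max(base - 0.1, 0.0), 1.0)
        d3 = min(max(base + 0.1 * i_out, 0.0), 1.0)
        return (d1, d2, d3)  # TPS ratios, each constrained to [0, 1]

# Distributed decision step: every agent acts on its own SM state only,
# so a submodule can be hot-swapped without retraining the others.
agents = [SubmoduleAgent(k) for k in range(4)]
states = [(0.2, 1.0), (-0.3, 0.8), (0.0, 1.2), (0.5, 0.9)]
actions = [agent.act(s) for agent, s in zip(agents, states)]
```

Because each agent's `act` call depends only on its own state tuple, the loop has no shared coordinator at run time, which is the redundancy/modularity property the abstract highlights.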
ISSN: 0278-0046, 1557-9948
DOI: 10.1109/TIE.2023.3294584