Real-Time Optimal Power Flow With Linguistic Stipulations: Integrating GPT-Agent and Deep Reinforcement Learning
Published in: IEEE Transactions on Power Systems, 2024-03, Vol. 39, No. 2, pp. 4747-4750
Format: Article
Language: English
Abstract: Practical operation of a power system must comply with Grid Codes, which are usually grounded in linguistic stipulations that may be hard to quantify in conventional optimal power flow (OPF) models. In light of recent breakthroughs in large language models (LLMs), this letter proposes an OPF model with linguistic stipulations and a solution method that integrates a generative pre-trained transformer (GPT) based LLM agent into the primal-dual deep reinforcement learning (DRL) training loop. For the first time, conventionally unquantifiable linguistic stipulations expressed in natural language can be directly modeled as objectives and constraints in the OPF problem. The GPT-agent interprets the satisfaction of linguistic stipulations as rewards and constraints. The non-differentiable rewards produced by the GPT-agent are optimized through interactions with the environment in the DRL process. Once sufficiently trained, the DRL agent can solve the OPF model in real time. The proposed method is demonstrated on the IEEE 118-bus system.
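The abstract's core mechanism, an LLM scoring the satisfaction of a linguistic stipulation and a primal-dual DRL loop optimizing the resulting non-differentiable reward, can be sketched in a few lines. The following Python sketch is illustrative only, not the authors' implementation: `query_gpt_agent`, the 0.8 score threshold, the base reward, and the learning rate are all hypothetical placeholders.

```python
import random

def query_gpt_agent(stipulation: str, state_summary: str) -> float:
    """Stand-in for the paper's GPT-agent: an LLM that scores how well the
    current operating point satisfies a linguistic Grid Code stipulation.
    Here it returns a dummy score in [0, 1]; in practice the LLM's text
    verdict would be parsed into a number."""
    return random.random()

def primal_dual_step(base_reward, violation, lam, lr_dual=0.01):
    """One Lagrangian-relaxation step: shape the reward with the current
    multiplier, then update the multiplier by projected dual ascent."""
    shaped_reward = base_reward - lam * violation
    lam = max(0.0, lam + lr_dual * violation)  # keep multiplier non-negative
    return shaped_reward, lam

lam = 0.0  # Lagrange multiplier for one linguistic constraint
for episode in range(3):  # toy stand-in for the DRL training loop
    state_summary = f"episode {episode}: all line loadings within limits"
    score = query_gpt_agent(
        "Avoid sustained overloading of tie lines during peak hours.",
        state_summary,
    )
    base_reward = -1.0                 # e.g. negative generation cost
    violation = max(0.0, 0.8 - score)  # low LLM score -> constraint violation
    shaped_reward, lam = primal_dual_step(base_reward, violation, lam)
    # shaped_reward would drive a standard DRL policy update (e.g. PPO) here
```

Under these assumptions, the dual variable grows while the GPT-judged stipulation is violated, steadily increasing the penalty until the policy learns to satisfy it, which mirrors the primal-dual structure described in the abstract.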
ISSN: 0885-8950, 1558-0679
DOI: 10.1109/TPWRS.2023.3338961