Deep-Reinforcement-Learning-Based Autonomous Voltage Control for Power Grid Operations
In this letter, a novel autonomous control framework "Grid Mind" is proposed for the secure operation of power grids based on cutting-edge artificial intelligence (AI) technologies. The proposed platform provides a data-driven, model-free and closed-loop control agent trained using deep reinforcement learning (DRL) algorithms by interacting with massive simulations and/or the real environment of a power grid.
Published in: | IEEE transactions on power systems 2020-01, Vol.35 (1), p.814-817 |
---|---|
Main authors: | Duan, Jiajun; Shi, Di; Diao, Ruisheng; Li, Haifeng; Wang, Zhiwei; Zhang, Bei; Bian, Desong; Yi, Zhehan |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 817 |
---|---|
container_issue | 1 |
container_start_page | 814 |
container_title | IEEE transactions on power systems |
container_volume | 35 |
creator | Duan, Jiajun; Shi, Di; Diao, Ruisheng; Li, Haifeng; Wang, Zhiwei; Zhang, Bei; Bian, Desong; Yi, Zhehan |
description | In this letter, a novel autonomous control framework "Grid Mind" is proposed for the secure operation of power grids based on cutting-edge artificial intelligence (AI) technologies. The proposed platform provides a data-driven, model-free and closed-loop control agent trained using deep reinforcement learning (DRL) algorithms by interacting with massive simulations and/or the real environment of a power grid. The proposed agent learns from scratch to master the power grid voltage control problem purely from data. It can produce autonomous voltage control (AVC) strategies to support grid operators in making effective and timely control actions, according to the current system conditions detected by real-time measurements from supervisory control and data acquisition (SCADA) or phasor measurement units (PMUs). Two state-of-the-art DRL algorithms, namely deep Q-network (DQN) and deep deterministic policy gradient (DDPG), are adopted to formulate the AVC problem, and their performance is compared. Case studies on a realistic 200-bus test system demonstrate the effectiveness and promising performance of the proposed framework. |
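The letter formulates AVC with two DRL algorithms, DQN (discrete actions) and DDPG (continuous actions), but this record does not reproduce the exact state, action, or reward design. Below is a minimal, illustrative DQN-style sketch under assumed conventions: the state is a vector of bus-voltage measurements (as would come from SCADA/PMU), each discrete action indexes a candidate generator voltage set-point adjustment, and the reward penalizes voltage-limit violations. All names (QNetwork, DQNAgent, layer sizes, hyperparameters) are hypothetical stand-ins, not the authors' Grid Mind implementation or their 200-bus test system.

```python
# Hedged sketch of a DQN-style agent for discrete voltage-control actions.
# The environment, state/action/reward conventions, and hyperparameters are assumptions.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim


class QNetwork(nn.Module):
    """Maps a vector of bus-voltage measurements to Q-values over discrete actions
    (e.g. candidate generator voltage set-point adjustments)."""
    def __init__(self, n_obs: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_obs, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, x):
        return self.net(x)


class DQNAgent:
    def __init__(self, n_obs, n_actions, gamma=0.99, lr=1e-3, eps=0.1):
        self.q = QNetwork(n_obs, n_actions)
        self.target_q = QNetwork(n_obs, n_actions)
        self.target_q.load_state_dict(self.q.state_dict())
        self.opt = optim.Adam(self.q.parameters(), lr=lr)
        self.buffer = deque(maxlen=50_000)
        self.gamma, self.eps, self.n_actions = gamma, eps, n_actions

    def act(self, obs):
        # Epsilon-greedy exploration over the discrete control actions.
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        with torch.no_grad():
            q = self.q(torch.as_tensor(obs, dtype=torch.float32))
        return int(q.argmax().item())

    def store(self, obs, action, reward, next_obs, done):
        # Replay buffer of (state, action, reward, next state, terminal flag) transitions.
        self.buffer.append((obs, action, reward, next_obs, done))

    def sync_target(self):
        # Periodically copy online weights into the target network.
        self.target_q.load_state_dict(self.q.state_dict())

    def train_step(self, batch_size=64):
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        obs, act, rew, nxt, done = map(
            lambda x: torch.as_tensor(x, dtype=torch.float32), zip(*batch))
        act = act.long().unsqueeze(1)
        # Q(s, a) for the actions actually taken.
        q_sa = self.q(obs).gather(1, act).squeeze(1)
        with torch.no_grad():
            # One-step TD target bootstrapped from the target network.
            target = rew + self.gamma * (1 - done) * self.target_q(nxt).max(1).values
        loss = nn.functional.smooth_l1_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
```

For the DDPG variant compared in the letter, the discrete argmax would be replaced by an actor network that outputs continuous set-points, trained against a critic; the replay-buffer and target-network machinery is analogous.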
doi_str_mv | 10.1109/TPWRS.2019.2941134 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 0885-8950 |
ispartof | IEEE transactions on power systems, 2020-01, Vol.35 (1), p.814-817 |
issn | 0885-8950 1558-0679 |
language | eng |
recordid | cdi_proquest_journals_2339345316 |
source | IEEE Electronic Library (IEL) |
subjects | Algorithms; Artificial intelligence; Automatic voltage control; autonomous voltage control; Computer simulation; DDPG; deep reinforcement learning; DQN; Electric potential; Generators; Grid Mind; Machine learning; Measuring instruments; PMU; Power grids; Supervisory control and data acquisition; Training; Voltage |
title | Deep-Reinforcement-Learning-Based Autonomous Voltage Control for Power Grid Operations |