Emergency control methods for power systems based on improved deep reinforcement learning
To achieve fast and accurate transient stability analysis and emergency control, this paper proposes a transient stability emergency control method based on improved deep reinforcement learning. To fully capture the temporal and spatial variation trends of the transient response, a multi-dimensional feature set containing information such as transient situation energy is constructed, and the deep reinforcement learning model is built on a spatio-temporal graph neural network. On this basis, an emergency control model is constructed, and power grid knowledge is integrated into the emergency control decision-making scheme to reduce exploration of invalid decisions and improve the model's performance. The effectiveness of the proposed method is verified on the IEEE 39-bus system.
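The abstract describes two ingredients that can be illustrated in general terms: a spatio-temporal graph neural network that encodes per-bus trajectories over the grid topology, and the use of power grid knowledge to rule out invalid control actions before the agent selects one. The sketch below is not the authors' implementation; it assumes a DQN-style agent, a hand-rolled graph convolution plus GRU encoder, and a hypothetical boolean feasibility mask standing in for "grid knowledge". Dimensions, the feature set (the paper's transient situation energy is not reproduced here), and the action set are illustrative only.

```python
# Illustrative sketch only -- NOT the authors' code. Assumes a DQN-style agent,
# a hand-rolled graph convolution over a 39-bus topology, and a hypothetical
# "grid-knowledge" mask of valid control actions.
import torch
import torch.nn as nn

N_BUS, T_WIN, F_IN, N_ACT = 39, 10, 4, 20   # buses, time steps, features per bus, candidate actions

class SpatioTemporalQNet(nn.Module):
    """GNN over the bus topology + GRU over the post-fault time window, then Q-values."""
    def __init__(self, adj: torch.Tensor, hidden: int = 64):
        super().__init__()
        # Symmetrically normalised adjacency with self-loops: A_hat = D^-1/2 (A + I) D^-1/2.
        a = adj + torch.eye(adj.size(0))
        d = a.sum(1).pow(-0.5)
        self.register_buffer("a_hat", d.unsqueeze(1) * a * d.unsqueeze(0))
        self.w_gc = nn.Linear(F_IN, hidden)            # graph-convolution weights
        self.gru = nn.GRU(hidden * N_BUS, hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, N_ACT)         # one Q-value per candidate control action

    def forward(self, x):                              # x: (batch, T_WIN, N_BUS, F_IN)
        b = x.size(0)
        h = torch.relu(self.a_hat @ self.w_gc(x))      # spatial mixing per time step
        h, _ = self.gru(h.reshape(b, T_WIN, -1))       # temporal aggregation over the window
        return self.q_head(h[:, -1])                   # Q-values from the last hidden state

def masked_greedy_action(q_values: torch.Tensor, valid_mask: torch.Tensor) -> torch.Tensor:
    """Grid knowledge enters as a boolean mask: invalid controls can never be selected."""
    q_values = q_values.masked_fill(~valid_mask, float("-inf"))
    return q_values.argmax(dim=-1)

if __name__ == "__main__":
    adj = (torch.rand(N_BUS, N_BUS) > 0.9).float()     # stand-in for the IEEE 39-bus topology
    adj = ((adj + adj.t()) > 0).float()
    net = SpatioTemporalQNet(adj)
    obs = torch.randn(2, T_WIN, N_BUS, F_IN)           # e.g. angles, speeds, voltages, energy terms
    mask = torch.rand(2, N_ACT) > 0.3                  # hypothetical feasibility from grid rules
    print(masked_greedy_action(net(obs), mask))
```

Masking infeasible actions before the greedy step is one common way to keep an agent from spending exploration budget on decisions the grid rules already forbid, which matches the stated motivation for embedding grid knowledge in the decision-making scheme.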
Saved in:
Published in: | Journal of physics. Conference series, 2024-10, Vol. 2858 (1), p. 12035 |
Main authors: | Zhang, Jie; Zhu, Yihua; Liang, Zhuohang; Ma, Qinfeng; Zhang, Qingqing; Liu, Mingshun; An, Su; Pu, Qingxin; Dai, Jiang |
Format: | Article |
Language: | eng |
Subjects: | Control methods; Control stability; Decision making; Deep learning; Emergency response; Graph neural networks; Multidimensional methods; Stability analysis; Transient response; Transient stability |
Online access: | Full text |
container_end_page | |
container_issue | 1 |
container_start_page | 12035 |
container_title | Journal of physics. Conference series |
container_volume | 2858 |
creator | Zhang, Jie; Zhu, Yihua; Liang, Zhuohang; Ma, Qinfeng; Zhang, Qingqing; Liu, Mingshun; An, Su; Pu, Qingxin; Dai, Jiang |
description | To achieve fast and accurate transient stability analysis and emergency control, this paper proposes a transient stability emergency control method based on improved deep reinforcement learning. To fully capture the temporal and spatial variation trends of the transient response, a multi-dimensional feature set containing information such as transient situation energy is constructed, and the deep reinforcement learning model is built on a spatio-temporal graph neural network. On this basis, an emergency control model is constructed, and power grid knowledge is integrated into the emergency control decision-making scheme to reduce exploration of invalid decisions and improve the model's performance. The effectiveness of the proposed method is verified on the IEEE 39-bus system. |
doi_str_mv | 10.1088/1742-6596/2858/1/012035 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 1742-6588 |
ispartof | Journal of physics. Conference series, 2024-10, Vol.2858 (1), p.12035 |
issn | 1742-6588 1742-6596 |
language | eng |
recordid | cdi_proquest_journals_3115066546 |
source | IOP Publishing Free Content; Institute of Physics IOPscience extra; EZB-FREE-00999 freely available EZB journals; Alma/SFX Local Collection; Free Full-Text Journals in Chemistry |
subjects | Control methods; Control stability; Decision making; Deep learning; Emergency response; Graph neural networks; Multidimensional methods; Stability analysis; Transient response; Transient stability |
title | Emergency control methods for power systems based on improved deep reinforcement learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-22T21%3A28%3A43IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Emergency%20control%20methods%20for%20power%20systems%20based%20on%20improved%20deep%20reinforcement%20learning&rft.jtitle=Journal%20of%20physics.%20Conference%20series&rft.au=Zhang,%20Jie&rft.date=2024-10-01&rft.volume=2858&rft.issue=1&rft.spage=12035&rft.pages=12035-&rft.issn=1742-6588&rft.eissn=1742-6596&rft_id=info:doi/10.1088/1742-6596/2858/1/012035&rft_dat=%3Cproquest_cross%3E3115066546%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3115066546&rft_id=info:pmid/&rfr_iscdi=true |