Multi-Agent Deep Reinforcement Learning for Distributed Load Restoration

This paper addresses the load restoration problem after power outage events. Our primary proposed methodology uses multi-agent deep reinforcement learning to optimize the load restoration process in distribution systems, modeled as networked microgrids, by determining the optimal operational sequence of circuit breakers (switches). An innovative invalid action masking technique is incorporated into the multi-agent method to handle both the physical constraints in the restoration process and the curse of dimensionality, as the action space of operational decisions grows exponentially with the number of circuit breakers. The features of our proposed method include centralized training for the multiple agents to overcome non-stationary environment problems, decentralized execution to ease deployment, and zero constraint violations to prevent harmful actions. Our simulations are performed in OpenDSS and Python environments to demonstrate the effectiveness of the proposed approach using the IEEE 13, 123, and 8500-node distribution test feeders. The results show that the proposed algorithm achieves a significantly better learning curve and stability than conventional methods.

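The abstract's key algorithmic ingredient, invalid action masking, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical Python example rather than the authors' implementation: it assumes an agent's policy network produces one logit per candidate breaker operation and that the environment supplies a boolean mask of actions that respect the current physical constraints; invalid actions are driven to near-zero probability before sampling.

```python
import numpy as np

def sample_masked_action(logits, valid_mask, rng=None):
    """Sample a breaker action while excluding physically invalid choices.

    logits     : per-action scores from an agent's policy network (1-D array)
    valid_mask : boolean array of the same length; True marks actions that
                 satisfy the current grid constraints (assumed to come from
                 the environment / power-flow checks)
    """
    rng = rng or np.random.default_rng()
    masked = np.where(valid_mask, logits, -1e9)   # invalid actions get ~zero probability
    probs = np.exp(masked - masked.max())         # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

# Toy usage: four candidate switch operations, two of which currently
# violate constraints; the sampled action is always a valid one.
logits = np.array([0.2, 1.5, -0.3, 0.8])
valid_mask = np.array([True, False, True, False])
action = sample_masked_action(logits, valid_mask)
assert valid_mask[action]
```

In the multi-agent setting described in the abstract, each agent would apply such a mask to its own set of circuit breakers, which is how zero constraint violations and a tractable, effectively reduced action space could be obtained.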

Bibliographic Details
Published in: IEEE Transactions on Smart Grid, March 2024, Vol. 15, Issue 2, pp. 1749-1760
Authors: Vu, Linh; Vu, Tuyen; Vu, Thanh Long; Srivastava, Anurag
Format: Article
Language: English
DOI: 10.1109/TSG.2023.3310893
Publisher: IEEE, Piscataway
ISSN: 1949-3053
EISSN: 1949-3061
Source: IEEE Xplore
Subjects:
Algorithms
Circuit breakers
Deep learning
Deep reinforcement learning
Distributed generation
distribution systems
invalid action masking
Learning curves
Linear programming
Load modeling
load restoration
Manganese
Microgrids
multi-agent systems
Multiagent systems
networked microgrids
Nonstationary environments
OpenDSS
Optimization
Restoration
Switches
Training