Learning to Operate an Electric Vehicle Charging Station Considering Vehicle-Grid Integration
The rapid adoption of electric vehicles (EVs) calls for the widespread installation of EV charging stations. To maximize the profitability of charging stations, intelligent controllers that provide both charging and electric grid services are in great need. However, it is challenging to determine the optimal charging schedule due to the uncertain arrival time and charging demands of EVs. In this paper, we propose a novel centralized allocation and decentralized execution (CADE) reinforcement learning (RL) framework to maximize the charging station's profit. In the centralized allocation process, EVs are allocated to either the waiting or charging spots. In the decentralized execution process, each charger makes its own charging/discharging decision while learning the action-value functions from a shared replay memory. This CADE framework significantly improves the scalability and sample efficiency of the RL algorithm. Numerical results show that the proposed CADE framework is both computationally efficient and scalable, and significantly outperforms the baseline model predictive control (MPC). We also provide an in-depth analysis of the learned action-value function to explain the inner working of the reinforcement learning agent.
Saved in:
Published in: | IEEE transactions on smart grid 2022-07, Vol.13 (4), p.3038-3048 |
---|---|
Main authors: | Ye, Zuzhao; Gao, Yuanqi; Yu, Nanpeng |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
container_end_page | 3048 |
---|---|
container_issue | 4 |
container_start_page | 3038 |
container_title | IEEE transactions on smart grid |
container_volume | 13 |
creator | Ye, Zuzhao Gao, Yuanqi Yu, Nanpeng |
description | The rapid adoption of electric vehicles (EVs) calls for the widespread installation of EV charging stations. To maximize the profitability of charging stations, intelligent controllers that provide both charging and electric grid services are in great need. However, it is challenging to determine the optimal charging schedule due to the uncertain arrival time and charging demands of EVs. In this paper, we propose a novel centralized allocation and decentralized execution (CADE) reinforcement learning (RL) framework to maximize the charging station's profit. In the centralized allocation process, EVs are allocated to either the waiting or charging spots. In the decentralized execution process, each charger makes its own charging/discharging decision while learning the action-value functions from a shared replay memory. This CADE framework significantly improves the scalability and sample efficiency of the RL algorithm. Numerical results show that the proposed CADE framework is both computationally efficient and scalable, and significantly outperforms the baseline model predictive control (MPC). We also provide an in-depth analysis of the learned action-value function to explain the inner working of the reinforcement learning agent. |
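The abstract's decentralized-execution idea, in which every charger learns its own action values but all chargers draw training samples from one shared replay memory, can be illustrated with a minimal sketch. This is not the authors' implementation; all class and method names (`SharedReplayMemory`, `ChargerAgent`, the three-action set) are hypothetical, and tabular Q-learning stands in for whatever function approximator the paper actually uses.

```python
import random
from collections import deque

class SharedReplayMemory:
    """One transition buffer shared by all charger agents (hypothetical sketch)."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Sample without replacement; cap at current buffer size.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

class ChargerAgent:
    """Each charger keeps its own tabular action-value estimates but
    learns from transitions any charger contributed to the shared memory."""
    ACTIONS = ("charge", "idle", "discharge")

    def __init__(self, memory, alpha=0.1, gamma=0.95):
        self.memory = memory
        self.q = {}  # (state, action) -> estimated value
        self.alpha, self.gamma = alpha, gamma

    def act(self, state):
        # Greedy over current estimates; ties broken by action order.
        return max(self.ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def step(self, state, action, reward, next_state):
        # Decentralized execution: the agent acts locally, then logs the
        # transition to the buffer that every agent trains from.
        self.memory.push((state, action, reward, next_state))

    def learn(self, batch_size=8):
        # Standard one-step Q-learning update over a sampled mini-batch.
        for s, a, r, s2 in self.memory.sample(batch_size):
            best_next = max(self.q.get((s2, b), 0.0) for b in self.ACTIONS)
            old = self.q.get((s, a), 0.0)
            self.q[(s, a)] = old + self.alpha * (r + self.gamma * best_next - old)

memory = SharedReplayMemory()
chargers = [ChargerAgent(memory) for _ in range(3)]
# Transitions observed by one charger improve every charger's estimates.
chargers[0].step("low_price", "charge", 1.0, "low_price")
chargers[1].step("high_price", "discharge", 1.0, "high_price")
for agent in chargers:
    agent.learn()
```

Because every agent samples from the same buffer, a charger that never saw the "high_price" state still learns a useful value for it, which is the sample-efficiency benefit the abstract attributes to the shared replay memory.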
doi | 10.1109/TSG.2022.3165479 |
format | Article |
identifier | ISSN: 1949-3053 |
ispartof | IEEE transactions on smart grid, 2022-07, Vol.13 (4), p.3038-3048 |
issn | 1949-3053 (print); 1949-3061 (electronic) |
language | eng |
recordid | cdi_proquest_journals_2679393985 |
source | IEEE Electronic Library (IEL) |
subjects | Algorithms; Batteries; charging station; Charging stations; Electric vehicle; Electric vehicle charging; Electric vehicles; Energy states; Learning; Optimization; Prediction algorithms; Predictive control; Profitability; reinforcement learning; Schedules; vehicle-grid integration; Vehicle-to-grid |
title | Learning to Operate an Electric Vehicle Charging Station Considering Vehicle-Grid Integration |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-08T13%3A41%3A14IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Learning%20to%20Operate%20an%20Electric%20Vehicle%20Charging%20Station%20Considering%20Vehicle-Grid%20Integration&rft.jtitle=IEEE%20transactions%20on%20smart%20grid&rft.au=Ye,%20Zuzhao&rft.date=2022-07-01&rft.volume=13&rft.issue=4&rft.spage=3038&rft.epage=3048&rft.pages=3038-3048&rft.issn=1949-3053&rft.eissn=1949-3061&rft.coden=ITSGBQ&rft_id=info:doi/10.1109/TSG.2022.3165479&rft_dat=%3Cproquest_RIE%3E2679393985%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2679393985&rft_id=info:pmid/&rft_ieee_id=9750198&rfr_iscdi=true |