Reinforcement Learning With Data Envelopment Analysis and Conditional Value-At-Risk for the Capacity Expansion Problem
The capacity expansion problem is solved by accurately measuring the existing demand-supply mismatch and controlling the emissions output, considering multiple objectives, specific constraints, resource diversity, and resource allocation. This article proposes a reinforcement learning (RL) framework...
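The abstract above mentions embedding data envelopment analysis (DEA) to evaluate efficiency. As an illustrative sketch only, on hypothetical toy data and not the authors' actual model, an input-oriented CCR efficiency score (envelopment form) can be computed with a linear program: minimize theta such that a convex cone of peer units uses at most theta times unit o's inputs while producing at least unit o's outputs.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o (envelopment form).

    X: (m inputs x n units), Y: (s outputs x n units).
    Decision variables: [theta, lambda_1, ..., lambda_n].
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                    # objective: minimize theta
    # Input constraints:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(m)
    # Output constraints: -sum_j lambda_j * y_rj <= -y_ro (outputs >= unit o's)
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Hypothetical data: two units, one input, one output. Unit 0 produces the
# same output with half the input, so unit 1 should score 0.5.
X = np.array([[1.0, 2.0]])
Y = np.array([[1.0, 1.0]])
print(ccr_efficiency(X, Y, 0), ccr_efficiency(X, Y, 1))
```

An efficiency score of 1.0 marks a unit on the frontier; scores below 1.0 indicate the proportional input reduction that would make the unit efficient.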
Saved in:
Published in: | IEEE transactions on engineering management 2024, Vol.71, p.1-12 |
---|---|
Main Authors: | Lee, Chia-Yen ; Chen, Yen-Wen |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
container_end_page | 12 |
---|---|
container_issue | |
container_start_page | 1 |
container_title | IEEE transactions on engineering management |
container_volume | 71 |
creator | Lee, Chia-Yen ; Chen, Yen-Wen |
description | The capacity expansion problem is solved by accurately measuring the existing demand-supply mismatch and controlling emissions output, considering multiple objectives, specific constraints, resource diversity, and resource allocation. This article proposes a reinforcement learning (RL) framework embedded with data envelopment analysis (DEA) to generate the optimal policy and guide productivity improvement. The proposed framework uses DEA to evaluate efficiency and effectiveness for reward estimation in RL, and also assesses conditional value-at-risk to characterize the risk-averse capacity decision. Instead of focusing on short-term fluctuations in demand, RL optimizes the expected future reward with sequential capacity decisions over time. An empirical study of U.S. power generation validates the proposed framework and provides managerial implications for policy makers. The results show that the RL agent can successfully learn the optimal policy by observing the interactions between the agent and the environment, and suggest capacity adjustments that can improve efficiency by 8.3% and effectiveness by 0.9%. We conclude that RL complements productivity analysis, and emphasizes ex-ante planning over ex-post evaluation. |
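The description notes that the framework "assesses conditional value-at-risk to characterize the risk-averse capacity decision." As a minimal illustrative sketch, not the paper's implementation, CVaR at confidence level alpha is the expected loss in the worst (1 - alpha) tail of the loss distribution:

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Conditional value-at-risk: mean loss in the worst (1 - alpha) tail."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)   # value-at-risk: the alpha-quantile cutoff
    tail = losses[losses >= var]       # losses at or beyond the VaR cutoff
    return tail.mean()

# Hypothetical capacity-shortfall losses: a 5% chance of a 100-unit shortfall.
losses = [0.0] * 95 + [100.0] * 5
print(cvar(losses, alpha=0.95))  # → 100.0 (mean of the worst 5% of outcomes)
```

A risk-averse agent would penalize candidate capacity decisions by this tail expectation rather than by mean loss alone, which is what makes the resulting policy conservative under demand uncertainty.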
doi_str_mv | 10.1109/TEM.2023.3264566 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 0018-9391 |
ispartof | IEEE transactions on engineering management, 2024, Vol.71, p.1-12 |
issn | 0018-9391 ; 1558-0040 |
language | eng |
recordid | cdi_ieee_primary_10104118 |
source | IEEE Electronic Library (IEL) |
subjects | Capacity expansion ; Capacity planning ; conditional value-at-risk (CVAR) ; Costs ; Data analysis ; Data envelopment analysis ; data envelopment analysis (DEA) ; Effectiveness ; efficiency and effectiveness measure ; Empirical analysis ; Evaluation ; Indexes ; Machine learning ; Optimization ; Power generation ; Productivity ; reinforcement learning (RL) ; Resource allocation ; Risk aversion ; risk-averse decision ; Uncertainty |
title | Reinforcement Learning With Data Envelopment Analysis and Conditional Value-At-Risk for the Capacity Expansion Problem |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-30T16%3A45%3A37IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Reinforcement%20Learning%20With%20Data%20Envelopment%20Analysis%20and%20Conditional%20Value-At-Risk%20for%20the%20Capacity%20Expansion%20Problem&rft.jtitle=IEEE%20transactions%20on%20engineering%20management&rft.au=Lee,%20Chia-Yen&rft.date=2024&rft.volume=71&rft.spage=1&rft.epage=12&rft.pages=1-12&rft.issn=0018-9391&rft.eissn=1558-0040&rft.coden=IEEMA4&rft_id=info:doi/10.1109/TEM.2023.3264566&rft_dat=%3Cproquest_RIE%3E3033620858%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3033620858&rft_id=info:pmid/&rft_ieee_id=10104118&rfr_iscdi=true |