Resource Allocation for Cloud-Based Software Services Using Prediction-Enabled Feedback Control With Reinforcement Learning
Published in: | IEEE Transactions on Cloud Computing, 2022-04, Vol. 10 (2), p. 1117-1129 |
---|---|
Main authors: | Chen, Xing; Zhu, Fangning; Chen, Zheyi; Min, Geyong; Zheng, Xianghan; Rong, Chunming |
Format: | Article |
Language: | English |
Subjects: | Cloud computing; Resource allocation; Reinforcement learning; Feedback control; Q-value prediction; Quality of service |
Online access: | Order full text |
container_end_page | 1129 |
---|---|
container_issue | 2 |
container_start_page | 1117 |
container_title | IEEE transactions on cloud computing |
container_volume | 10 |
creator | Chen, Xing; Zhu, Fangning; Chen, Zheyi; Min, Geyong; Zheng, Xianghan; Rong, Chunming |
description | With time-varying workloads and service requests, cloud-based software services necessitate adaptive resource allocation to guarantee Quality-of-Service (QoS) and reduce resource costs. However, due to ever-changing system states, resource allocation for cloud-based software services faces huge challenges in dynamics and complexity. Traditional approaches mostly rely on expert knowledge or numerous iterations, which can lead to weak adaptiveness and extra costs. Moreover, existing reinforcement learning (RL)-based methods target environments with fixed workloads and thus cannot adapt effectively to real-world scenarios with variable workloads. To address these challenges, we propose a Prediction-enabled feedback Control with Reinforcement learning based resource Allocation (PCRA) method. First, a novel Q-value prediction model is designed to predict the values of management operations (as Q-values) at different system states. The model uses multiple prediction learners to make accurate Q-value predictions by integrating the Q-learning algorithm. Next, the objective resource allocation plans are found using a new feedback-control based decision-making algorithm. Using the RUBiS benchmark, simulation results demonstrate that PCRA chooses the management operations of resource allocation with 93.7 percent correctness. Moreover, PCRA achieves optimal/near-optimal performance, outperforming classic ML-based and rule-based methods by 5∼7% and 10∼13%, respectively. |
doi | 10.1109/TCC.2020.2992537 |
format | Article |
fullrecord | <record><control><sourceid>proquest_RIE</sourceid><recordid>TN_cdi_ieee_primary_9086132</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><ieee_id>9086132</ieee_id><sourcerecordid>2674084169</sourcerecordid><originalsourceid>FETCH-LOGICAL-c221t-25827145fe695c1b247a880c4a73e42f22ee2908fae994c10d551fb1f0e7557a3</originalsourceid><addsrcrecordid>eNpNkM1Lw0AQxRdRsGjvgpcFz6k7m2ySPdbQqlBQ-oHHsNnMamqarbupIv7zbmkR5zJz-L03M4-QK2AjACZvl0Ux4oyzEZeSizg7IQMOaR5lkMLpv_mcDL1fs1C5AAlyQH7m6O3OaaTjtrVa9Y3tqLGOFq3d1dGd8ljThTX9l3JIF-g-G42ernzTvdJnh3Wj95Jo0qmqDegUsa6UfqeF7XpnW_rS9G90jk0XTDVusOvpDJXrgv6SnBnVehwe-wVZTSfL4iGaPd0_FuNZpDmHPuIi5xkkwmAqhYaKJ5nKc6YTlcWYcMM5IpcsNwqlTDSwWggwFRiGmRCZii_IzcF36-zHDn1frsPLXVhZ8jRLWJ5AKgPFDpR21nuHpty6ZqPcdwms3KdchpTLfcrlMeUguT5IGkT8w8MpKcQ8_gUQVHi7</addsrcrecordid><sourcetype>Aggregation Database</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype><pqid>2674084169</pqid></control><display><type>article</type><title>Resource Allocation for Cloud-Based Software Services Using Prediction-Enabled Feedback Control With Reinforcement Learning</title><source>IEEE Electronic Library (IEL)</source><creator>Chen, Xing ; Zhu, Fangning ; Chen, Zheyi ; Min, Geyong ; Zheng, Xianghan ; Rong, Chunming</creator><creatorcontrib>Chen, Xing ; Zhu, Fangning ; Chen, Zheyi ; Min, Geyong ; Zheng, Xianghan ; Rong, Chunming</creatorcontrib><description><![CDATA[With time-varying workloads and service requests, cloud-based software services necessitate adaptive resource allocation for guaranteeing Quality-of-Service (QoS) and reducing resource costs. However, due to the ever-changing system states, resource allocation for cloud-based software services faces huge challenges in dynamics and complexity. The traditional approaches mostly rely on expert knowledge or numerous iterations, which might lead to weak adaptiveness and extra costs. Moreover, existing RL-based methods target the environment with the fixed workload, and thus they are unable to effectively fit in the real-world scenarios with variable workloads. To address these important challenges, we propose a Prediction-enabled feedback Control with Reinforcement learning based resource Allocation (PCRA) method. First, a novel Q-value prediction model is designed to predict the values of management operations (by Q-values) at different system states. The model uses multiple prediction learners for making accurate Q-value prediction by integrating the Q-learning algorithm. Next, the objective resource allocation plans can be found by using a new feedback-control based decision-making algorithm. Using the RUBiS benchmark, simulation results demonstrate that the PCRA chooses the management operations of resource allocation with 93.7 percent correctness. 
Moreover, the PCRA achieves optimal/near-optimal performance, and it outperforms the classic ML-based and rule-based methods by 5<inline-formula><tex-math notation="LaTeX">\sim</tex-math> <mml:math><mml:mo>∼</mml:mo></mml:math><inline-graphic xlink:href="chen-ieq1-2992537.gif"/> </inline-formula>7% and 10<inline-formula><tex-math notation="LaTeX">\sim</tex-math> <mml:math><mml:mo>∼</mml:mo></mml:math><inline-graphic xlink:href="chen-ieq2-2992537.gif"/> </inline-formula>13%, respectively.]]></description><identifier>ISSN: 2168-7161</identifier><identifier>EISSN: 2168-7161</identifier><identifier>EISSN: 2372-0018</identifier><identifier>DOI: 10.1109/TCC.2020.2992537</identifier><identifier>CODEN: ITCCF6</identifier><language>eng</language><publisher>Piscataway: IEEE</publisher><subject>Algorithms ; Cloud computing ; Cloud-based software services ; Control systems ; Decision making ; Feedback control ; Machine learning ; Prediction algorithms ; Prediction models ; Predictive models ; Q values ; Q-value prediction ; Quality of service ; reinforcement learning ; Resource allocation ; Resource management ; Software ; Software services ; Workload ; Workloads</subject><ispartof>IEEE transactions on cloud computing, 2022-04, Vol.10 (2), p.1117-1129</ispartof><rights>Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022</rights><lds50>peer_reviewed</lds50><woscitedreferencessubscribed>false</woscitedreferencessubscribed><citedby>FETCH-LOGICAL-c221t-25827145fe695c1b247a880c4a73e42f22ee2908fae994c10d551fb1f0e7557a3</citedby><cites>FETCH-LOGICAL-c221t-25827145fe695c1b247a880c4a73e42f22ee2908fae994c10d551fb1f0e7557a3</cites><orcidid>0000-0001-9641-3528 ; 0000-0002-8347-0539 ; 0000-0002-6349-068X ; 0000-0003-1395-7314</orcidid></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><linktohtml>$$Uhttps://ieeexplore.ieee.org/document/9086132$$EHTML$$P50$$Gieee$$H</linktohtml><link.rule.ids>314,780,784,796,27924,27925,54758</link.rule.ids><linktorsrc>$$Uhttps://ieeexplore.ieee.org/document/9086132$$EView_record_in_IEEE$$FView_record_in_$$GIEEE</linktorsrc></links><search><creatorcontrib>Chen, Xing</creatorcontrib><creatorcontrib>Zhu, Fangning</creatorcontrib><creatorcontrib>Chen, Zheyi</creatorcontrib><creatorcontrib>Min, Geyong</creatorcontrib><creatorcontrib>Zheng, Xianghan</creatorcontrib><creatorcontrib>Rong, Chunming</creatorcontrib><title>Resource Allocation for Cloud-Based Software Services Using Prediction-Enabled Feedback Control With Reinforcement Learning</title><title>IEEE transactions on cloud computing</title><addtitle>TCC</addtitle><description><![CDATA[With time-varying workloads and service requests, cloud-based software services necessitate adaptive resource allocation for guaranteeing Quality-of-Service (QoS) and reducing resource costs. However, due to the ever-changing system states, resource allocation for cloud-based software services faces huge challenges in dynamics and complexity. The traditional approaches mostly rely on expert knowledge or numerous iterations, which might lead to weak adaptiveness and extra costs. Moreover, existing RL-based methods target the environment with the fixed workload, and thus they are unable to effectively fit in the real-world scenarios with variable workloads. To address these important challenges, we propose a Prediction-enabled feedback Control with Reinforcement learning based resource Allocation (PCRA) method. 
First, a novel Q-value prediction model is designed to predict the values of management operations (by Q-values) at different system states. The model uses multiple prediction learners for making accurate Q-value prediction by integrating the Q-learning algorithm. Next, the objective resource allocation plans can be found by using a new feedback-control based decision-making algorithm. Using the RUBiS benchmark, simulation results demonstrate that the PCRA chooses the management operations of resource allocation with 93.7 percent correctness. Moreover, the PCRA achieves optimal/near-optimal performance, and it outperforms the classic ML-based and rule-based methods by 5<inline-formula><tex-math notation="LaTeX">\sim</tex-math> <mml:math><mml:mo>∼</mml:mo></mml:math><inline-graphic xlink:href="chen-ieq1-2992537.gif"/> </inline-formula>7% and 10<inline-formula><tex-math notation="LaTeX">\sim</tex-math> <mml:math><mml:mo>∼</mml:mo></mml:math><inline-graphic xlink:href="chen-ieq2-2992537.gif"/> </inline-formula>13%, respectively.]]></description><subject>Algorithms</subject><subject>Cloud computing</subject><subject>Cloud-based software services</subject><subject>Control systems</subject><subject>Decision making</subject><subject>Feedback control</subject><subject>Machine learning</subject><subject>Prediction algorithms</subject><subject>Prediction models</subject><subject>Predictive models</subject><subject>Q values</subject><subject>Q-value prediction</subject><subject>Quality of service</subject><subject>reinforcement learning</subject><subject>Resource allocation</subject><subject>Resource management</subject><subject>Software</subject><subject>Software services</subject><subject>Workload</subject><subject>Workloads</subject><issn>2168-7161</issn><issn>2168-7161</issn><issn>2372-0018</issn><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2022</creationdate><recordtype>article</recordtype><sourceid>RIE</sourceid><recordid>eNpNkM1Lw0AQxRdRsGjvgpcFz6k7m2ySPdbQqlBQ-oHHsNnMamqarbupIv7zbmkR5zJz-L03M4-QK2AjACZvl0Ux4oyzEZeSizg7IQMOaR5lkMLpv_mcDL1fs1C5AAlyQH7m6O3OaaTjtrVa9Y3tqLGOFq3d1dGd8ljThTX9l3JIF-g-G42ernzTvdJnh3Wj95Jo0qmqDegUsa6UfqeF7XpnW_rS9G90jk0XTDVusOvpDJXrgv6SnBnVehwe-wVZTSfL4iGaPd0_FuNZpDmHPuIi5xkkwmAqhYaKJ5nKc6YTlcWYcMM5IpcsNwqlTDSwWggwFRiGmRCZii_IzcF36-zHDn1frsPLXVhZ8jRLWJ5AKgPFDpR21nuHpty6ZqPcdwms3KdchpTLfcrlMeUguT5IGkT8w8MpKcQ8_gUQVHi7</recordid><startdate>20220401</startdate><enddate>20220401</enddate><creator>Chen, Xing</creator><creator>Zhu, Fangning</creator><creator>Chen, Zheyi</creator><creator>Min, Geyong</creator><creator>Zheng, Xianghan</creator><creator>Rong, Chunming</creator><general>IEEE</general><general>The Institute of Electrical and Electronics Engineers, Inc. 
(IEEE)</general><scope>97E</scope><scope>RIA</scope><scope>RIE</scope><scope>AAYXX</scope><scope>CITATION</scope><scope>7SC</scope><scope>8FD</scope><scope>JQ2</scope><scope>L7M</scope><scope>L~C</scope><scope>L~D</scope><orcidid>https://orcid.org/0000-0001-9641-3528</orcidid><orcidid>https://orcid.org/0000-0002-8347-0539</orcidid><orcidid>https://orcid.org/0000-0002-6349-068X</orcidid><orcidid>https://orcid.org/0000-0003-1395-7314</orcidid></search><sort><creationdate>20220401</creationdate><title>Resource Allocation for Cloud-Based Software Services Using Prediction-Enabled Feedback Control With Reinforcement Learning</title><author>Chen, Xing ; Zhu, Fangning ; Chen, Zheyi ; Min, Geyong ; Zheng, Xianghan ; Rong, Chunming</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-LOGICAL-c221t-25827145fe695c1b247a880c4a73e42f22ee2908fae994c10d551fb1f0e7557a3</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2022</creationdate><topic>Algorithms</topic><topic>Cloud computing</topic><topic>Cloud-based software services</topic><topic>Control systems</topic><topic>Decision making</topic><topic>Feedback control</topic><topic>Machine learning</topic><topic>Prediction algorithms</topic><topic>Prediction models</topic><topic>Predictive models</topic><topic>Q values</topic><topic>Q-value prediction</topic><topic>Quality of service</topic><topic>reinforcement learning</topic><topic>Resource allocation</topic><topic>Resource management</topic><topic>Software</topic><topic>Software services</topic><topic>Workload</topic><topic>Workloads</topic><toplevel>peer_reviewed</toplevel><toplevel>online_resources</toplevel><creatorcontrib>Chen, Xing</creatorcontrib><creatorcontrib>Zhu, Fangning</creatorcontrib><creatorcontrib>Chen, Zheyi</creatorcontrib><creatorcontrib>Min, Geyong</creatorcontrib><creatorcontrib>Zheng, Xianghan</creatorcontrib><creatorcontrib>Rong, Chunming</creatorcontrib><collection>IEEE All-Society Periodicals Package (ASPP) 2005-present</collection><collection>IEEE All-Society Periodicals Package (ASPP) 1998-Present</collection><collection>IEEE Electronic Library (IEL)</collection><collection>CrossRef</collection><collection>Computer and Information Systems Abstracts</collection><collection>Technology Research Database</collection><collection>ProQuest Computer Science Collection</collection><collection>Advanced Technologies Database with Aerospace</collection><collection>Computer and Information Systems Abstracts Academic</collection><collection>Computer and Information Systems Abstracts Professional</collection><jtitle>IEEE transactions on cloud computing</jtitle></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext_linktorsrc</fulltext></delivery><addata><au>Chen, Xing</au><au>Zhu, Fangning</au><au>Chen, Zheyi</au><au>Min, Geyong</au><au>Zheng, Xianghan</au><au>Rong, Chunming</au><format>journal</format><genre>article</genre><ristype>JOUR</ristype><atitle>Resource Allocation for Cloud-Based Software Services Using Prediction-Enabled Feedback Control With Reinforcement Learning</atitle><jtitle>IEEE transactions on cloud computing</jtitle><stitle>TCC</stitle><date>2022-04-01</date><risdate>2022</risdate><volume>10</volume><issue>2</issue><spage>1117</spage><epage>1129</epage><pages>1117-1129</pages><issn>2168-7161</issn><eissn>2168-7161</eissn><eissn>2372-0018</eissn><coden>ITCCF6</coden><abstract><![CDATA[With time-varying workloads and service requests, cloud-based software services 
necessitate adaptive resource allocation for guaranteeing Quality-of-Service (QoS) and reducing resource costs. However, due to the ever-changing system states, resource allocation for cloud-based software services faces huge challenges in dynamics and complexity. The traditional approaches mostly rely on expert knowledge or numerous iterations, which might lead to weak adaptiveness and extra costs. Moreover, existing RL-based methods target the environment with the fixed workload, and thus they are unable to effectively fit in the real-world scenarios with variable workloads. To address these important challenges, we propose a Prediction-enabled feedback Control with Reinforcement learning based resource Allocation (PCRA) method. First, a novel Q-value prediction model is designed to predict the values of management operations (by Q-values) at different system states. The model uses multiple prediction learners for making accurate Q-value prediction by integrating the Q-learning algorithm. Next, the objective resource allocation plans can be found by using a new feedback-control based decision-making algorithm. Using the RUBiS benchmark, simulation results demonstrate that the PCRA chooses the management operations of resource allocation with 93.7 percent correctness. Moreover, the PCRA achieves optimal/near-optimal performance, and it outperforms the classic ML-based and rule-based methods by 5<inline-formula><tex-math notation="LaTeX">\sim</tex-math> <mml:math><mml:mo>∼</mml:mo></mml:math><inline-graphic xlink:href="chen-ieq1-2992537.gif"/> </inline-formula>7% and 10<inline-formula><tex-math notation="LaTeX">\sim</tex-math> <mml:math><mml:mo>∼</mml:mo></mml:math><inline-graphic xlink:href="chen-ieq2-2992537.gif"/> </inline-formula>13%, respectively.]]></abstract><cop>Piscataway</cop><pub>IEEE</pub><doi>10.1109/TCC.2020.2992537</doi><tpages>13</tpages><orcidid>https://orcid.org/0000-0001-9641-3528</orcidid><orcidid>https://orcid.org/0000-0002-8347-0539</orcidid><orcidid>https://orcid.org/0000-0002-6349-068X</orcidid><orcidid>https://orcid.org/0000-0003-1395-7314</orcidid></addata></record> |
identifier | ISSN: 2168-7161 |
ispartof | IEEE transactions on cloud computing, 2022-04, Vol.10 (2), p.1117-1129 |
issn | 2168-7161; 2372-0018 |
language | eng |
recordid | cdi_ieee_primary_9086132 |
source | IEEE Electronic Library (IEL) |
subjects | Algorithms; Cloud computing; Cloud-based software services; Control systems; Decision making; Feedback control; Machine learning; Prediction algorithms; Prediction models; Predictive models; Q values; Q-value prediction; Quality of service; Reinforcement learning; Resource allocation; Resource management; Software; Software services; Workload; Workloads |
title | Resource Allocation for Cloud-Based Software Services Using Prediction-Enabled Feedback Control With Reinforcement Learning |
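The abstract states that PCRA's Q-value prediction model integrates the Q-learning algorithm to estimate the value of management operations at different system states. For orientation only, the sketch below shows a generic tabular Q-learning update of the kind being integrated; the state encoding, action names, and reward here are illustrative assumptions and do not reproduce the paper's actual PCRA implementation or its prediction learners.

```python
# Minimal tabular Q-learning sketch. All names (states, actions, reward shape)
# are illustrative assumptions, not the PCRA paper's actual formulation.
import random
from collections import defaultdict

ACTIONS = ["scale_up", "scale_down", "hold"]   # hypothetical management operations
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2          # learning rate, discount, exploration

q_table = defaultdict(float)                   # (state, action) -> Q-value

def choose_action(state):
    """Epsilon-greedy selection over the learned Q-values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update: Q <- Q + alpha * (r + gamma * max_a' Q' - Q)."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    td_target = reward + GAMMA * best_next
    q_table[(state, action)] += ALPHA * (td_target - q_table[(state, action)])

# Toy usage: a state might summarize (workload level, allocated VMs); the reward
# could trade off QoS against resource cost, as the abstract describes.
state = ("high_load", 2)
action = choose_action(state)
update(state, action, reward=-0.5, next_state=("high_load", 3))
print(action, dict(q_table))
```

In the paper's setting, such Q-values are what the multiple prediction learners are trained to estimate, and the feedback-control loop then selects among management operations; the loop itself is not shown here.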