Deep Reinforcement Learning Enhanced Greedy Optimization for Online Scheduling of Batched Tasks in Cloud HPC Systems

Bibliographic Details
Published in: IEEE transactions on parallel and distributed systems 2022-11, Vol.33 (11), p.3003-3014
Main authors: Yang, Yuanhao; Shen, Hong
Format: Article
Language: eng
container_end_page 3014
container_issue 11
container_start_page 3003
container_title IEEE transactions on parallel and distributed systems
container_volume 33
creator Yang, Yuanhao
Shen, Hong
description In a large cloud data center HPC system, a critical problem is how to allocate submitted tasks to heterogeneous servers so as to maximize the system's gain, defined as the value of completed tasks minus system operation costs. We consider this problem in the online setting where tasks arrive in batches, and propose a novel deep reinforcement learning (DRL) enhanced greedy optimization algorithm for two-stage scheduling that couples task sequencing with task allocation. For task sequencing, we deploy a DRL module to predict the best allocation sequence for each arriving batch of tasks based on the knowledge (allocation strategies) learnt from previous batches. For task allocation, we propose a greedy strategy that allocates tasks to servers one by one online, following the allocation sequence, to maximize the total gain increase. We show that our greedy strategy has a performance guarantee of competitive ratio $\frac{1}{1+\kappa}$ to the optimal offline solution, which improves the existing result for the same problem, where $\kappa$ is upper bounded by the maximum cost-to-gain ratio of each task. While our DRL module enhances the greedy algorithm by providing the likely-optimal allocation sequence for each batch of arriving tasks, our greedy strategy bounds DRL's prediction error within a proven worst-case performance guarantee for any allocation sequence. This enables better solution quality than is obtainable from either DRL or greedy optimization alone. Extensive evaluation in both simulation and real application environments demonstrates the effectiveness and efficiency of our proposed algorithm: compared with the state-of-the-art baselines, it increases the system gain by about 10% to 30%. Our algorithm provides an interesting example of combining machine learning (ML) and greedy optimization techniques to improve ML-based solutions with a worst-case performance guarantee for solving hard optimization problems.
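The guarantee above can be restated compactly. Writing $g_i$ and $c_i$ for the gain and cost of task $i$ (symbols inferred from the abstract; the record itself defines neither), the greedy stage promises

$\mathrm{ALG} \ge \frac{1}{1+\kappa}\,\mathrm{OPT}, \qquad \kappa \le \max_i \frac{c_i}{g_i}.$

To make the two-stage idea concrete, here is a minimal Python sketch of the greedy allocation stage as the abstract describes it: tasks are processed one by one in the DRL-predicted order, and each goes to the feasible server with the largest marginal gain increase. Everything here is an illustrative assumption (the names Task, Server, marginal_gain, greedy_allocate and the linear capacity/cost model); the paper's actual gain and cost definitions are not reproduced in this record.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Task:
    value: float    # value earned if the task completes (assumed model)
    demand: float   # resource units the task consumes

@dataclass
class Server:
    capacity: float   # remaining resource units on this server
    unit_cost: float  # operation cost per resource unit used

def marginal_gain(task: Task, server: Server) -> float:
    # Gain increase from placing `task` on `server`: its value minus
    # the operation cost it incurs (a stand-in for the paper's model).
    return task.value - task.demand * server.unit_cost

def greedy_allocate(batch: List[Task], servers: List[Server]) -> List[Optional[int]]:
    # Allocate tasks one by one, in the given (DRL-predicted) order,
    # each to the feasible server with the largest gain increase.
    # A task whose best gain increase is non-positive is rejected (None).
    placement: List[Optional[int]] = []
    for task in batch:
        best_idx, best_gain = None, 0.0
        for idx, server in enumerate(servers):
            if server.capacity < task.demand:
                continue  # not enough remaining capacity on this server
            gain = marginal_gain(task, server)
            if gain > best_gain:
                best_idx, best_gain = idx, gain
        if best_idx is not None:
            servers[best_idx].capacity -= task.demand
        placement.append(best_idx)
    return placement

The point the abstract stresses is that this stage holds its worst-case bound for any input order, so a poor DRL prediction can only degrade the solution down to the $\frac{1}{1+\kappa}$ floor, while a good prediction closes the gap to the offline optimum.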
doi_str_mv 10.1109/TPDS.2021.3138459
format Article
fulltext fulltext_linktorsrc
identifier ISSN: 1045-9219
ispartof IEEE transactions on parallel and distributed systems, 2022-11, Vol.33 (11), p.3003-3014
issn 1045-9219
1558-2183
language eng
recordid cdi_ieee_primary_9664254
source IEEE Electronic Library (IEL)
subjects Algorithms
approximation algorithm
Approximation algorithms
Business competition
Cloud computing
Computer aided scheduling
Costs
Data centers
Deep learning
deep reinforcement learning
Greedy algorithms
greedy optimization
Machine learning
Modules
Optimization
Optimization techniques
Processor scheduling
Resource management
Servers
Strategy
Task analysis
Task scheduling
title Deep Reinforcement Learning Enhanced Greedy Optimization for Online Scheduling of Batched Tasks in Cloud HPC Systems