Low-latency job scheduling with preemption for the development of deep learning
One significant challenge in the job scheduling of computing clusters for the development of deep learning algorithms is the efficient scheduling of trial-and-error (TE) jobs, the type of job in which users conduct small-scale experiments while monitoring their processes. Unfortunately, existing job schedulers either do not feature well-balanced scheduling for a mixture of TE jobs and best-effort (BE) jobs, or can handle the mixture only in limited situations. To fill this niche, we propose an algorithm that can significantly reduce the latency of TE jobs in versatile situations without greatly increasing the slowdown of the BE jobs. Our algorithm efficiently schedules both TE and BE jobs by selectively preempting the BE jobs that can, when the time comes, be resumed without much delay. In our simulation study with synthetic and real workloads, we reduced the 95th percentile of the slowdown rates for the TE jobs under the standard FIFO strategy by 96.6%, while compromising the median of the BE slowdown rates by only 18.0% and the 95th percentile by only 23.9%.
Saved in:

Main authors: | Yabuuchi, Hidehito ; Taniwaki, Daisuke ; Omura, Shingo |
---|---|
Format: | Article |
Language: | English |
Subjects: | Computer Science - Distributed, Parallel, and Cluster Computing ; Computer Science - Learning |
Online access: | Order full text |
creator | Yabuuchi, Hidehito ; Taniwaki, Daisuke ; Omura, Shingo |
description | One significant challenge in the job scheduling of computing clusters for the
development of deep learning algorithms is the efficient scheduling of
trial-and-error (TE) jobs, the type of job in which users conduct
small-scale experiments while monitoring their processes. Unfortunately,
existing job schedulers either do not feature well-balanced scheduling for a
mixture of TE jobs and best-effort (BE) jobs, or can handle the mixture only in
limited situations. To fill this niche, we propose an algorithm that
can significantly reduce the latency of TE jobs in versatile situations without
greatly increasing the slowdown of the BE jobs. Our algorithm efficiently
schedules both TE and BE jobs by selectively preempting the BE jobs that
can, when the time comes, be resumed without much delay. In our simulation study
with synthetic and real workloads, we reduced the 95th percentile
of the slowdown rates for the TE jobs under the standard FIFO strategy by 96.6%,
while compromising the median of the BE slowdown rates by only 18.0% and the
95th percentile by only 23.9%. |
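The core idea in the abstract — selectively preempting the best-effort (BE) jobs that can be resumed without much delay — can be sketched as a victim-selection routine. This is a minimal, hypothetical illustration, assuming each BE job holds one GPU and carries an estimated suspend/resume cost; the paper's actual criterion and scheduler design may differ.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class BEJob:
    # Estimated delay (seconds) to checkpoint this job now and resume it
    # later. A lower cost makes the job a better preemption victim.
    resume_cost: float
    name: str = field(compare=False)

def pick_victims(running_be_jobs, gpus_needed):
    """Pick the cheapest-to-resume BE jobs to free `gpus_needed` GPUs
    for an incoming trial-and-error (TE) job.

    Hypothetical sketch: assumes one GPU per BE job and a known
    resume-cost estimate per job.
    """
    victims = heapq.nsmallest(gpus_needed, running_be_jobs)
    return [v.name for v in victims]

jobs = [BEJob(120.0, "be-train-a"), BEJob(15.0, "be-train-b"), BEJob(45.0, "be-train-c")]
print(pick_victims(jobs, 2))  # the two BE jobs cheapest to suspend and resume
```

Selecting victims by resume cost, rather than arbitrarily, is what keeps the BE slowdown bounded while still freeing resources quickly for latency-sensitive TE jobs.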
format | Article |
identifier | DOI: 10.48550/arxiv.1902.01613 |
language | eng |
recordid | cdi_arxiv_primary_1902_01613 |
source | arXiv.org |
subjects | Computer Science - Distributed, Parallel, and Cluster Computing ; Computer Science - Learning |
title | Low-latency job scheduling with preemption for the development of deep learning |
url | https://arxiv.org/abs/1902.01613 |