Crossover‐SGD: A gossip‐based communication in distributed deep learning for alleviating large mini‐batch problem and enhancing scalability

Bibliographic Details
Published in: Concurrency and Computation, 2023-07, Vol. 35 (15), p. n/a
Main Authors: Yeo, Sangho; Bae, Minho; Jeong, Minjoong; Kwon, Oh‐Kyoung; Oh, Sangyoon
Format: Article
Language: English
DOI: 10.1002/cpe.7508
ISSN: 1532-0626
EISSN: 1532-0634
Subjects: Accuracy; Communication; Deep learning; distributed deep learning; Gossip; gossip based; hierarchical communication; large mini‐batch problem; Mathematical models; Network topologies; Parameters; Propagation; segment‐wise
Online Access: Full Text
Summary: Distributed deep learning is an effective way to reduce the training time for large datasets and complex models. However, the limited scalability caused by network overheads makes it difficult to synchronize the parameters of all workers, and gossip‐based methods, which demonstrate stable scalability regardless of the number of workers, have been proposed. To use gossip‐based methods in general cases, however, their validation accuracy on large mini‐batches needs to be verified. To this end, we first empirically study the characteristics of gossip methods on the large mini‐batch problem and observe that gossip methods preserve higher validation accuracy than AllReduce‐SGD (stochastic gradient descent) when the batch size is increased and the number of workers is fixed. However, the delayed parameter propagation of gossip‐based models decreases validation accuracy at large node scales. To cope with this problem, we propose Crossover‐SGD, which alleviates the delayed propagation of weight parameters via segment‐wise communication and a random network topology with fair peer selection. We also adapt hierarchical communication to limit the number of workers participating in gossip‐based communication. To validate the effectiveness of our method, we conduct empirical experiments and observe that Crossover‐SGD shows higher node scalability than stochastic gradient push.
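The abstract describes segment‐wise gossip averaging over a random topology with fair peer selection. The following minimal NumPy sketch illustrates that general idea only, not the authors' implementation: each round, workers are paired by a random matching (a simple stand-in for fair peer selection), and each pair averages one parameter segment at a time, so different segments mix along different edges of the communication graph. All names, sizes, and the pairing rule are illustrative assumptions.

```python
# Minimal sketch of segment-wise gossip averaging (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

NUM_WORKERS = 8
NUM_SEGMENTS = 4
PARAM_DIM = 32  # toy flattened-parameter size

# Toy parameter vectors, one per worker.
params = [rng.normal(size=PARAM_DIM) for _ in range(NUM_WORKERS)]

def gossip_step(params):
    """One round of segment-wise gossip averaging.

    For every segment, workers are paired by a fresh random perfect
    matching (a stand-in for the paper's fair peer selection), and each
    pair averages that segment only.
    """
    segments = np.array_split(np.arange(PARAM_DIM), NUM_SEGMENTS)
    for seg in segments:
        order = rng.permutation(NUM_WORKERS)
        # Pair workers (order[0], order[1]), (order[2], order[3]), ...
        for a, b in zip(order[::2], order[1::2]):
            avg = 0.5 * (params[a][seg] + params[b][seg])
            params[a][seg] = avg
            params[b][seg] = avg

for _ in range(20):
    gossip_step(params)

# Pairwise averaging preserves the global mean, so workers drift
# toward the average model without any AllReduce step.
spread = max(np.abs(p - np.mean(params, axis=0)).max() for p in params)
print(f"max deviation from the global mean after 20 rounds: {spread:.2e}")
```

Because each pairwise average is mean-preserving, repeated rounds drive every worker toward the global average model without a synchronized AllReduce; exchanging segments rather than whole parameter vectors is what the abstract credits with faster propagation of weight updates across nodes.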