rTop-k: A Statistical Estimation Approach to Distributed SGD
The large communication cost for exchanging gradients between different nodes significantly limits the scalability of distributed training for large-scale learning models. Motivated by this observation, there has been significant recent interest in techniques that reduce the communication cost of distributed Stochastic Gradient Descent (SGD), with gradient sparsification techniques such as top-k and random-k shown to be particularly effective. The same observation has also motivated a separate line of work in distributed statistical estimation theory focusing on the impact of communication constraints on the estimation efficiency of different statistical models. The primary goal of this paper is to connect these two research lines and demonstrate how statistical estimation models and their analysis can lead to new insights in the design of communication-efficient training techniques. We propose a simple statistical estimation model for the stochastic gradients which captures the sparsity and skewness of their distribution. The statistically optimal communication scheme arising from the analysis of this model leads to a new sparsification technique for SGD, which concatenates random-k and top-k, considered separately in the prior literature. We show through extensive experiments on both image and language domains with CIFAR-10, ImageNet, and Penn Treebank datasets that the concatenated application of these two sparsification methods consistently and significantly outperforms either method applied alone.
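A minimal sketch of the concatenated random-k and top-k sparsification described in the abstract, assuming the scheme first samples r gradient coordinates uniformly at random and then keeps the k largest-magnitude entries among them. The function name rtop_k, the parameters r and k, and the NumPy implementation are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def rtop_k(grad: np.ndarray, r: int, k: int, rng=None) -> np.ndarray:
    """Hypothetical sketch of concatenated random-k + top-k sparsification:
    sample r coordinates uniformly at random, then keep only the k
    largest-magnitude entries among those r (all others are zeroed)."""
    rng = np.random.default_rng() if rng is None else rng
    flat = grad.ravel()
    # Stage 1 (random-k): choose r candidate coordinates without replacement.
    candidates = rng.choice(flat.size, size=min(r, flat.size), replace=False)
    # Stage 2 (top-k): among the candidates, keep the k largest in absolute value.
    keep = candidates[np.argsort(np.abs(flat[candidates]))[-k:]]
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(grad.shape)

# Example: sparsify a toy gradient, sampling 8 of 16 coordinates and keeping the top 2.
g = np.random.default_rng(0).normal(size=16)
print(rtop_k(g, r=8, k=2))
```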
Main authors: | Barnes, Leighton Pate; Inan, Huseyin A; Isik, Berivan; Ozgur, Ayfer |
---|---|
Format: | Article |
Language: | English |
Subjects: | Computer Science - Information Theory; Computer Science - Learning; Mathematics - Information Theory; Mathematics - Statistics Theory; Statistics - Machine Learning; Statistics - Theory |
Online access: | Order full text |
creator | Barnes, Leighton Pate; Inan, Huseyin A; Isik, Berivan; Ozgur, Ayfer |
description | The large communication cost for exchanging gradients between different nodes
significantly limits the scalability of distributed training for large-scale
learning models. Motivated by this observation, there has been significant
recent interest in techniques that reduce the communication cost of distributed
Stochastic Gradient Descent (SGD), with gradient sparsification techniques such
as top-k and random-k shown to be particularly effective. The same observation
has also motivated a separate line of work in distributed statistical
estimation theory focusing on the impact of communication constraints on the
estimation efficiency of different statistical models. The primary goal of this
paper is to connect these two research lines and demonstrate how statistical
estimation models and their analysis can lead to new insights in the design of
communication-efficient training techniques. We propose a simple statistical
estimation model for the stochastic gradients which captures the sparsity and
skewness of their distribution. The statistically optimal communication scheme
arising from the analysis of this model leads to a new sparsification technique
for SGD, which concatenates random-k and top-k, considered separately in the
prior literature. We show through extensive experiments on both image and
language domains with CIFAR-10, ImageNet, and Penn Treebank datasets that the
concatenated application of these two sparsification methods consistently and
significantly outperforms either method applied alone. |
doi_str_mv | 10.48550/arxiv.2005.10761 |
format | Article |
creationdate | 2020-05-21 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2005.10761 |
language | eng |
recordid | cdi_arxiv_primary_2005_10761 |
source | arXiv.org |
subjects | Computer Science - Information Theory; Computer Science - Learning; Mathematics - Information Theory; Mathematics - Statistics Theory; Statistics - Machine Learning; Statistics - Theory |
title | rTop-k: A Statistical Estimation Approach to Distributed SGD |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-18T19%3A34%3A36IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=rTop-k:%20A%20Statistical%20Estimation%20Approach%20to%20Distributed%20SGD&rft.au=Barnes,%20Leighton%20Pate&rft.date=2020-05-21&rft_id=info:doi/10.48550/arxiv.2005.10761&rft_dat=%3Carxiv_GOX%3E2005_10761%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |