New Bounds For Distributed Mean Estimation and Variance Reduction

Bibliographic Details
Main Authors: Davies, Peter; Gurunathan, Vijaykrishna; Moshrefi, Niusha; Ashkboos, Saleh; Alistarh, Dan
Format: Article
Language: English
Description: We consider the problem of distributed mean estimation (DME), in which $n$ machines are each given a local $d$-dimensional vector $x_v \in \mathbb{R}^d$ and must cooperate to estimate the mean of their inputs, $\mu = \frac{1}{n}\sum_{v=1}^n x_v$, while minimizing total communication cost. DME is a fundamental construct in distributed machine learning, and there has been considerable work on variants of this problem, especially in the context of distributed variance reduction for stochastic gradients in parallel SGD. Previous work typically assumes an upper bound on the norm of the input vectors and achieves an error bound in terms of this norm. However, in many real applications the input vectors are concentrated around the correct output $\mu$, but $\mu$ itself has large norm. In such cases, previous output error bounds perform poorly. In this paper, we show that output error bounds need not depend on input norm. We provide a quantization method which allows distributed mean estimation to be performed with solution quality dependent only on the distance between inputs, not on input norm, and we show an analogous result for distributed variance reduction. The technique is based on a new connection with lattice theory. We also provide lower bounds showing that the communication-to-error trade-off of our algorithms is asymptotically optimal. As the lattices achieving optimal bounds under the $\ell_2$ norm can be computationally impractical, we also present an extension which leverages easy-to-use cubic lattices and is loose only up to a logarithmic factor in $d$. We show experimentally that our method yields practical improvements for common applications, relative to prior approaches.
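The cubic-lattice variant mentioned in the abstract lends itself to a simple illustration. Below is a minimal, hypothetical sketch in Python (not the authors' actual protocol): each machine stochastically rounds its input to a scaled cubic lattice $\varepsilon \mathbb{Z}^d$, giving an unbiased estimate whose error scales with the lattice spacing $\varepsilon$, chosen relative to the spread of the inputs, rather than with $\|\mu\|$. All function names and parameters here are illustrative assumptions.

import numpy as np

def cubic_lattice_quantize(x: np.ndarray, eps: float, rng: np.random.Generator) -> np.ndarray:
    """Unbiased stochastic rounding of each coordinate to the grid eps * Z."""
    scaled = x / eps
    low = np.floor(scaled)
    # Round up with probability equal to the fractional part, so E[result] = x.
    up = rng.random(x.shape) < (scaled - low)
    return (low + up) * eps

def estimate_mean(inputs: list, eps: float, seed: int = 0) -> np.ndarray:
    """Average the quantized inputs; per-vector quantization error is
    O(eps * sqrt(d)), independent of the inputs' norm."""
    rng = np.random.default_rng(seed)
    return np.mean([cubic_lattice_quantize(x, eps, rng) for x in inputs], axis=0)

# Inputs tightly clustered around a large-norm mean: the estimation error
# tracks the lattice spacing eps, not the magnitude of mu.
rng = np.random.default_rng(1)
mu = 1000.0 * np.ones(64)
inputs = [mu + 0.1 * rng.standard_normal(64) for _ in range(16)]
estimate = estimate_mean(inputs, eps=0.05)
print(np.linalg.norm(estimate - np.mean(inputs, axis=0)))

Note that this sketch only reproduces the norm-independent error of cubic-lattice rounding; a naive encoding of the resulting lattice point would still cost bits proportional to the input norm, and removing that communication dependence is precisely what the paper's lattice-based scheme achieves.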
DOI: 10.48550/arxiv.2002.09268
Source: arXiv.org
Subjects: Computer Science - Distributed, Parallel, and Cluster Computing; Computer Science - Learning; Statistics - Machine Learning