PowerGossip: Practical Low-Rank Communication Compression in Decentralized Deep Learning

Lossy gradient compression has become a practical tool to overcome the communication bottleneck in centrally coordinated distributed training of machine learning models. However, algorithms for decentralized training with compressed communication over arbitrary connected networks have been more complicated, requiring additional memory and hyperparameters. We introduce a simple algorithm that directly compresses the model differences between neighboring workers using low-rank linear compressors. Inspired by the PowerSGD algorithm for centralized deep learning, this algorithm uses power iteration steps to maximize the information transferred per bit. We prove that our method requires no additional hyperparameters, converges faster than prior methods, and is asymptotically independent of both the network and the compression. Out of the box, these compressors perform on par with state-of-the-art tuned compression algorithms in a series of deep learning benchmarks.
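The abstract's core idea is that each pair of neighboring workers exchanges a low-rank approximation of their model difference, refined by power iteration so that every transmitted bit carries as much information as possible. The sketch below is a minimal NumPy illustration of a single rank-1 power-iteration step of that general idea, not the authors' reference implementation; the matrix shapes, the shared starting vector, and the function name are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's reference code):
# one rank-1 power-iteration step that summarizes a model-difference
# matrix with two small vectors instead of the full matrix.
import numpy as np

def rank1_power_step(delta, q):
    """delta: (m, n) model-difference matrix; q: (n,) current query vector.
    Returns the two vectors that would be communicated in place of delta."""
    p = delta @ q                       # m numbers for the neighbor
    p /= np.linalg.norm(p) + 1e-12      # normalize so iterations stay stable
    q_new = delta.T @ p                 # n numbers for the reply
    return p, q_new                     # rank-1 estimate: np.outer(p, q_new)

# Toy usage: two workers' parameters reshaped into 64x32 matrices.
rng = np.random.default_rng(0)
x_i = rng.standard_normal((64, 32))
x_j = rng.standard_normal((64, 32))
q = rng.standard_normal(32)             # shared (e.g. seeded) start vector
p, q = rank1_power_step(x_i - x_j, q)   # message size: 64 + 32 floats, not 64*32
approx = np.outer(p, q)                 # low-rank estimate of x_i - x_j
```

Repeating such steps across gossip rounds lets the low-rank estimate track the dominant directions of the model difference, which is the "maximize the information transferred per bit" intuition from the abstract.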

Bibliographic details
Main authors: Vogels, Thijs, Karimireddy, Sai Praneeth, Jaggi, Martin
Format: Article
Language: English
creator Vogels, Thijs
Karimireddy, Sai Praneeth
Jaggi, Martin
doi_str_mv 10.48550/arxiv.2008.01425
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2008.01425
language eng
recordid cdi_arxiv_primary_2008_01425
source arXiv.org
subjects Computer Science - Distributed, Parallel, and Cluster Computing
Computer Science - Learning
Mathematics - Optimization and Control
Statistics - Machine Learning
title PowerGossip: Practical Low-Rank Communication Compression in Decentralized Deep Learning