PowerGossip: Practical Low-Rank Communication Compression in Decentralized Deep Learning
Main author(s): , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Lossy gradient compression has become a practical tool to overcome the communication bottleneck in centrally coordinated distributed training of machine learning models. However, algorithms for decentralized training with compressed communication over arbitrary connected networks have been more complicated, requiring additional memory and hyperparameters. We introduce a simple algorithm that directly compresses the model differences between neighboring workers using low-rank linear compressors. Inspired by the PowerSGD algorithm for centralized deep learning, this algorithm uses power iteration steps to maximize the information transferred per bit. We prove that our method requires no additional hyperparameters, converges faster than prior methods, and is asymptotically independent of both the network and the compression. Out of the box, these compressors perform on par with state-of-the-art tuned compression algorithms in a series of deep learning benchmarks.
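As a rough illustration of the method described in the abstract (approximating the difference between two neighboring workers' parameter matrices with a low-rank factorization refined by power iteration), here is a minimal NumPy sketch for a single pair of workers and rank 1. It simulates both workers in one process; the function name `rank1_difference_update`, the averaging step size, and the warm-started vector `q` are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def rank1_difference_update(x_i, x_j, q_prev, step=0.5):
    """One compressed gossip-style averaging step between two neighbors.

    x_i, x_j : (m, n) parameter matrices of the two workers
    q_prev   : (n,) vector warm-started from the previous round
    step     : averaging step size toward the neighbor (assumed value)
    """
    # Simulated locally here; in a distributed run each worker would only
    # exchange its own projections (x @ q_prev and x.T @ p), never the
    # full m-by-n difference matrix.
    diff = x_i - x_j
    p = diff @ q_prev                  # = x_i @ q_prev - x_j @ q_prev (m floats)
    p /= np.linalg.norm(p) + 1e-12     # normalize for numerical stability
    q = diff.T @ p                     # = x_i.T @ p - x_j.T @ p (n floats)
    approx = np.outer(p, q)            # rank-1 estimate of the difference
    # Both workers move toward each other along the compressed difference.
    x_i_new = x_i - step * approx
    x_j_new = x_j + step * approx
    return x_i_new, x_j_new, q         # q is reused (warm-started) next round

# Toy usage: repeated compressed steps shrink the disagreement between workers.
rng = np.random.default_rng(0)
x_i, x_j = rng.standard_normal((2, 64, 32))
q = rng.standard_normal(32)
for _ in range(50):
    x_i, x_j, q = rank1_difference_update(x_i, x_j, q)
print("remaining disagreement:", np.linalg.norm(x_i - x_j))
```

Reusing `q` across rounds plays the role of the warm-started power iteration mentioned in the abstract: a single matrix-vector product per round is enough to track the dominant direction of the remaining disagreement, without introducing extra hyperparameters.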
DOI: 10.48550/arxiv.2008.01425