Distributed SLIDE: Enabling Training Large Neural Networks on Low Bandwidth and Simple CPU-Clusters via Model Parallelism and Sparsity
Main Authors: , , ,
Format: Article
Language: English
Online Access: Order full text
Summary: More than 70% of cloud computing is paid for but sits idle. A large fraction of this idle compute consists of cheap CPUs with only a few cores that sit unused during off-peak hours. This paper aims to enable those CPU cycles to train heavyweight AI models. Our goal runs counter to that of mainstream frameworks, which focus on leveraging expensive, specialized, ultra-high-bandwidth interconnects to address the communication bottleneck in distributed neural network training. This paper presents a distributed model-parallel training framework that enables training large neural networks on small CPU clusters with low Internet bandwidth. We build upon the adaptive sparse training framework introduced by the SLIDE algorithm. By carefully deploying sparsity over distributed nodes, we demonstrate model-parallel training that is several orders of magnitude faster than Horovod, the main engine behind most commercial software. We show that with the communication reduced by sparsity, we can train a model with close to a billion parameters on simple 4-16 core CPU nodes connected by basic low-bandwidth interconnects. Moreover, the training time is on par with some of the best hardware accelerators.
DOI: 10.48550/arxiv.2201.12667
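
The summary describes the idea rather than the implementation. As a rough illustration of why LSH-driven sparsity cuts communication in model-parallel training, the following minimal Python sketch (all names, sizes, and the SimHash-style signed random projections are hypothetical choices, not taken from the paper or the SLIDE codebase) gives each simulated node a shard of a wide layer, hashes each shard's neurons once, and exchanges only the (index, value) pairs of the few neurons selected for a given input.

```python
# Hypothetical sketch, not the authors' code: each "node" owns a shard of a huge
# fully connected layer; instead of exchanging the full activation vector, only
# the indices and values of the neurons selected by an LSH lookup are exchanged.

import numpy as np

rng = np.random.default_rng(0)

D_IN = 256              # input dimension (made up for the example)
NEURONS_PER_NODE = 10_000
NUM_NODES = 4           # pretend cluster of small CPU nodes
NUM_HASHES = 16         # signed random projections per shard (SimHash-style)

class LayerShard:
    """One node's slice of a very wide layer."""
    def __init__(self, n_neurons, d_in):
        self.W = rng.standard_normal((n_neurons, d_in)) * 0.01
        # Hash each neuron's weight vector once; in practice codes would be
        # refreshed only occasionally as weights drift.
        self.proj = rng.standard_normal((NUM_HASHES, d_in))
        self.codes = (self.W @ self.proj.T > 0)       # (n_neurons, NUM_HASHES) bits

    def active_set(self, x, budget):
        """Indices of neurons whose hash code best matches the input's code."""
        x_code = (self.proj @ x > 0)
        matches = (self.codes == x_code).sum(axis=1)  # Hamming similarity
        return np.argpartition(matches, -budget)[-budget:]

    def sparse_forward(self, x, budget):
        idx = self.active_set(x, budget)
        return idx, self.W[idx] @ x                   # only `budget` dot products

shards = [LayerShard(NEURONS_PER_NODE, D_IN) for _ in range(NUM_NODES)]
x = rng.standard_normal(D_IN)

budget = 64  # active neurons per shard, tiny compared to NEURONS_PER_NODE
sparse_bytes = 0
for shard in shards:
    idx, vals = shard.sparse_forward(x, budget)
    # In a real cluster these (index, value) pairs are what crosses the network.
    sparse_bytes += idx.nbytes + vals.nbytes

dense_bytes = NUM_NODES * NEURONS_PER_NODE * 8        # full float64 activations
print(f"sparse payload: {sparse_bytes} bytes vs dense payload: {dense_bytes} bytes")
```

The point of the sketch is that the per-step payload scales with the active-neuron budget rather than with the full layer width, which is why, under the paper's framing, basic low-bandwidth links between small CPU nodes can be sufficient.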