1-bit LAMB: Communication Efficient Large-Scale Large-Batch Training with LAMB's Convergence Speed
Saved in:
Main Authors:
Format: Article
Language: eng
Subjects:
Online Access: Order full text
Abstract: To train large models (like BERT and GPT-3) on hundreds of GPUs,
communication has become a major bottleneck, especially on commodity systems
with limited-bandwidth TCP networks. On one side, large batch-size optimization
such as the LAMB algorithm was proposed to reduce the frequency of communication.
On the other side, communication compression algorithms such as 1-bit Adam help
to reduce the volume of each communication. However, we find that simply using
one of the techniques is not sufficient to solve the communication challenge,
especially under low network bandwidth. Motivated by this, we aim to combine the
power of large-batch optimization and communication compression, but we find
that existing compression strategies cannot be directly applied to LAMB due to
its unique adaptive layerwise learning rates. To this end, we design a new
communication-efficient algorithm, 1-bit LAMB, which introduces a novel way to
support adaptive layerwise learning rates under compression. In addition, we
introduce a new system implementation for compressed communication using the
NCCL backend of PyTorch distributed, which improves both usability and
performance. For BERT-Large pre-training task with batch sizes from 8K to 64K,
our evaluations on up to 256 GPUs demonstrate that 1-bit LAMB with NCCL-based
backend is able to achieve up to 4.6x communication volume reduction, up to
2.8x end-to-end time-wise speedup, and the same sample-wise convergence speed
(and same fine-tuning task accuracy) compared to uncompressed LAMB.
DOI: 10.48550/arxiv.2104.06069
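Since this record only carries the abstract, the sketch below is a minimal, hypothetical illustration of the two ingredients the abstract says the paper combines: sign-based 1-bit compression with error feedback (the 1-bit Adam family) and LAMB's layerwise trust ratio. It is not the paper's actual 1-bit LAMB algorithm or its DeepSpeed implementation; the function names `one_bit_compress` and `lamb_trust_ratio` are illustrative inventions.

```python
import torch

def one_bit_compress(tensor: torch.Tensor, error: torch.Tensor):
    """Sign-based 1-bit compression with error feedback (illustrative only)."""
    corrected = tensor + error                  # re-inject the residual from the previous step
    scale = corrected.abs().mean()              # a single scale per tensor accompanies the sign bits
    compressed = scale * torch.sign(corrected)  # this is what would actually be communicated
    new_error = corrected - compressed          # carry the quantization error forward
    return compressed, new_error

def lamb_trust_ratio(param: torch.Tensor, update: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """LAMB-style layerwise scaling: ratio of parameter norm to update norm."""
    w_norm, u_norm = param.norm(), update.norm()
    if w_norm > 0 and u_norm > 0:
        return w_norm / (u_norm + eps)
    return torch.tensor(1.0)

# Toy single-process usage (no distributed setup required for the sketch):
param = torch.randn(1024)
grad = torch.randn(1024)
error = torch.zeros_like(grad)

compressed_grad, error = one_bit_compress(grad, error)    # payload that would be all-reduced
update = compressed_grad                                   # stand-in for the optimizer's raw update
scaled_update = lamb_trust_ratio(param, update) * update   # layerwise learning-rate scaling
```

According to the abstract, the paper's contribution is precisely how to keep these adaptive layerwise learning rates well-behaved once the communicated updates are compressed; the sketch above only juxtaposes the standard building blocks. The abstract also mentions a system implementation of compressed communication on the NCCL backend of PyTorch distributed. The snippet below shows only the stock NCCL-backed `all_reduce` path one would start from, not the paper's custom compressed-communication backend; the launch command and tensor size are assumptions.

```python
import torch
import torch.distributed as dist

# Hypothetical launch: torchrun --nproc_per_node=4 nccl_allreduce_sketch.py
dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

payload = torch.randn(1024, device="cuda")      # stand-in for a (de)compressed update chunk
dist.all_reduce(payload, op=dist.ReduceOp.SUM)  # NCCL-backed collective
payload /= dist.get_world_size()                # average across workers

dist.destroy_process_group()
```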