8-bit Optimizers via Block-wise Quantization
Format: | Article |
Language: | English |
Online access: | Order full text |
Abstract: | Stateful optimizers maintain gradient statistics over time, e.g., the
exponentially smoothed sum (SGD with momentum) or squared sum (Adam) of past
gradient values. This state can be used to accelerate optimization compared to
plain stochastic gradient descent but uses memory that might otherwise be
allocated to model parameters, thereby limiting the maximum size of models
trained in practice. In this paper, we develop the first optimizers that use
8-bit statistics while maintaining the performance levels of using 32-bit
optimizer states. To overcome the resulting computational, quantization, and
stability challenges, we develop block-wise dynamic quantization. Block-wise
quantization divides input tensors into smaller blocks that are independently
quantized. Each block is processed in parallel across cores, yielding faster
optimization and high precision quantization. To maintain stability and
performance, we combine block-wise quantization with two additional changes:
(1) dynamic quantization, a form of non-linear optimization that is precise for
both large and small magnitude values, and (2) a stable embedding layer to
reduce gradient variance that comes from the highly non-uniform distribution of
input tokens in language models. As a result, our 8-bit optimizers maintain
32-bit performance with a small fraction of the memory footprint on a range of
tasks, including 1.5B parameter language modeling, GLUE finetuning, ImageNet
classification, WMT'14 machine translation, MoCo v2 contrastive ImageNet
pretraining+finetuning, and RoBERTa pretraining, without changes to the
original optimizer hyperparameters. We open-source our 8-bit optimizers as a
drop-in replacement that only requires a two-line code change. |
DOI: | 10.48550/arxiv.2110.02861 |
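
The block-wise quantization described in the abstract can be sketched in a few lines of PyTorch: the state tensor is split into fixed-size blocks, each block is normalized by its own absolute maximum, and the normalized values are mapped to the nearest entry of a 256-entry 8-bit codebook. The block size of 2048 and the uniform codebook below are illustrative assumptions; the paper pairs block-wise normalization with its non-linear dynamic quantization codebook, which this sketch does not reproduce.

```python
import torch
import torch.nn.functional as F

def blockwise_quantize(x: torch.Tensor, codebook: torch.Tensor, block_size: int = 2048):
    """Quantize a tensor to 8-bit indices, with one scale (absmax) per block."""
    flat = x.flatten()
    pad = (-flat.numel()) % block_size
    flat = F.pad(flat, (0, pad))                        # pad to a whole number of blocks
    blocks = flat.view(-1, block_size)                  # each row is one independent block
    absmax = blocks.abs().amax(dim=1, keepdim=True).clamp(min=1e-12)
    normed = blocks / absmax                            # every block now lies in [-1, 1]
    # nearest-neighbour lookup into the 256-entry codebook
    idx = (normed.unsqueeze(-1) - codebook).abs().argmin(dim=-1)
    return idx.to(torch.uint8), absmax, x.shape

def blockwise_dequantize(idx, absmax, shape, codebook):
    """Invert the quantization: look up codebook values and rescale per block."""
    values = codebook[idx.long()] * absmax              # undo the per-block normalization
    return values.flatten()[:shape.numel()].view(shape)

# Illustrative codebook: a uniform 256-entry grid over [-1, 1]. The paper's
# dynamic quantization uses a non-linear codebook that is denser near zero.
codebook = torch.linspace(-1.0, 1.0, 256)

state = torch.randn(10_000)                             # stand-in for an optimizer state tensor
q, absmax, shape = blockwise_quantize(state, codebook)
recovered = blockwise_dequantize(q, absmax, shape, codebook)
print("max abs error:", (state - recovered).abs().max().item())
```

Because each block carries its own scale, one outlier only degrades precision inside its own block, and the independent blocks can be quantized in parallel, which is what the abstract credits for both the speed and the precision of the method.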
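The stable embedding layer is only named in the abstract; the sketch below assumes the usual stabilizing ingredients (Xavier-uniform initialization and a layer norm applied after the lookup) and is not a verbatim copy of the paper's layer.

```python
import torch
import torch.nn as nn

class StableEmbeddingSketch(nn.Module):
    """Embedding with reduced gradient variance for highly non-uniform token frequencies."""
    def __init__(self, num_embeddings: int, embedding_dim: int):
        super().__init__()
        self.embed = nn.Embedding(num_embeddings, embedding_dim)
        nn.init.xavier_uniform_(self.embed.weight)   # tighter init than the default N(0, 1)
        self.norm = nn.LayerNorm(embedding_dim)      # normalizes rare- and frequent-token rows alike

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.norm(self.embed(token_ids))
```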
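Finally, the advertised two-line code change presumably amounts to swapping the optimizer constructor. The sketch below assumes the open-sourced implementation is the bitsandbytes package and that its bnb.optim.Adam8bit class mirrors the arguments of torch.optim.Adam.

```python
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(512, 512)
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # 32-bit baseline
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-3)   # 8-bit optimizer states, same hyperparameters
```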