Learned Step Size Quantization
Format: Article
Language: English
Abstract: Deep networks that run with low-precision operations at inference time offer power and space advantages over high-precision alternatives, but must overcome the challenge of maintaining high accuracy as precision decreases. Here, we present a method for training such networks, Learned Step Size Quantization, that achieves the highest accuracy to date on the ImageNet dataset when using models, from a variety of architectures, with weights and activations quantized to 2, 3, or 4 bits of precision, and that can train 3-bit models that reach full-precision baseline accuracy. Our approach builds upon existing methods for learning weights in quantized networks by improving how the quantizer itself is configured. Specifically, we introduce a novel means to estimate and scale the task loss gradient at each weight and activation layer's quantizer step size, such that it can be learned in conjunction with other network parameters. This approach works using different levels of precision as needed for a given system and requires only a simple modification of existing training code.
DOI: 10.48550/arxiv.1902.08153
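
To make the idea in the abstract concrete, the sketch below shows a quantizer whose step size is a learnable parameter, trained jointly with the weights. It is a minimal PyTorch sketch, not the authors' reference implementation: the names `LsqQuantizer`, `grad_scale`, and `round_pass` are hypothetical, the round operation uses a straight-through estimator, the step size is initialized from the first batch, and the gradient scale uses the tensor's element count (the paper scales by the number of weights or features per layer).

```python
import torch
import torch.nn as nn


def grad_scale(x, scale):
    # Forward: returns x unchanged; backward: gradient is multiplied by `scale`.
    return (x - x * scale).detach() + x * scale


def round_pass(x):
    # Round with a straight-through estimator: forward rounds, backward is identity.
    return (x.round() - x).detach() + x


class LsqQuantizer(nn.Module):
    """Quantizer with a learnable step size (illustrative sketch)."""

    def __init__(self, bits, is_activation=False):
        super().__init__()
        if is_activation:
            # Unsigned range, e.g. for activations following a ReLU.
            self.qn, self.qp = 0, 2 ** bits - 1
        else:
            # Signed, roughly symmetric range for weights.
            self.qn, self.qp = 2 ** (bits - 1), 2 ** (bits - 1) - 1
        self.step = nn.Parameter(torch.tensor(1.0))
        self.initialized = False

    def forward(self, x):
        if not self.initialized:
            # Heuristic first-batch initialization of the step size.
            with torch.no_grad():
                self.step.copy_(2 * x.abs().mean() / (self.qp ** 0.5))
            self.initialized = True
        # Scale the step-size gradient so it stays commensurate with weight gradients.
        g = 1.0 / ((x.numel() * self.qp) ** 0.5)
        s = grad_scale(self.step, g)
        # Quantize: divide by the step size, clip to the integer range, round, rescale.
        x = torch.clamp(x / s, float(-self.qn), float(self.qp))
        x = round_pass(x)
        return x * s


if __name__ == "__main__":
    # Toy usage: quantize a weight tensor to 3 bits and backpropagate a dummy loss.
    w = nn.Parameter(torch.randn(64, 64))
    quantizer = LsqQuantizer(bits=3)
    loss = quantizer(w).pow(2).mean()
    loss.backward()
    print(quantizer.step.grad, w.grad.shape)
```

Because both the division by `s` and the final rescaling by `s` participate in the autograd graph, the task loss gradient reaches the step size directly, which is the "simple modification of existing training code" the abstract refers to.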