Post-training Quantization with Multiple Points: Mixed Precision without Mixed Precision
Main author(s): | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | We consider the post-training quantization problem, which discretizes the weights of pre-trained deep neural networks without re-training the model. We propose multipoint quantization, a quantization method that approximates a full-precision weight vector using a linear combination of multiple vectors of low-bit numbers; this is in contrast to typical quantization methods that approximate each weight using a single low-precision number. Computationally, we construct the multipoint quantization with an efficient greedy selection procedure, and adaptively decide the number of low-precision points on each quantized weight vector based on the error of its output. This allows us to achieve higher precision levels for important weights that greatly influence the outputs, yielding an 'effect of mixed precision' without physical mixed-precision implementations (which require specialized hardware accelerators). Empirically, our method can be implemented with common operands, bringing almost no memory and computation overhead. We show that our method outperforms a range of state-of-the-art methods on ImageNet classification and that it generalizes to more challenging tasks such as PASCAL VOC object detection. |
---|---|
DOI: | 10.48550/arxiv.2002.09049 |
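The record only carries the abstract, so the sketch below is a rough illustration of the greedy construction it describes rather than the paper's implementation: a full-precision weight vector is approximated by a sum of coefficient-weighted low-bit vectors, each fitted to the remaining residual, and points are added until the approximation is good enough. The names `uniform_quantize` and `multipoint_quantize`, the per-vector uniform quantizer, and the residual-norm stopping rule (used here in place of the paper's layer-output error criterion) are all assumptions for illustration.

```python
import numpy as np

def uniform_quantize(v, num_bits=4):
    """Illustrative per-vector uniform quantizer: scale * q approximates v."""
    levels = 2 ** (num_bits - 1) - 1                  # e.g. 7 for 4-bit signed
    scale = np.max(np.abs(v)) / levels if np.any(v) else 1.0
    q = np.clip(np.round(v / scale), -levels, levels)
    return q, scale

def multipoint_quantize(w, num_bits=4, max_points=4, tol=1e-2):
    """Greedy multipoint quantization sketch (not the paper's exact procedure).

    Approximates `w` by sum_i a_i * q_i, where each q_i is a low-bit vector.
    Each step quantizes the current residual, fits the coefficient a_i by
    least squares, and stops once the relative residual norm falls below
    `tol` (the paper instead stops based on the error of the layer output).
    """
    residual = w.astype(np.float64).copy()
    points, coeffs = [], []
    w_norm = np.linalg.norm(w) + 1e-12
    for _ in range(max_points):
        q, scale = uniform_quantize(residual, num_bits)
        v = scale * q                                  # low-bit vector, float form
        denom = np.dot(v, v)
        if denom == 0.0:
            break
        a = np.dot(residual, v) / denom                # least-squares coefficient
        points.append(v)
        coeffs.append(a)
        residual = residual - a * v
        if np.linalg.norm(residual) / w_norm < tol:
            break                                      # enough points for this vector
    approx = sum(a * v for a, v in zip(coeffs, points))
    return approx, coeffs, points

# Toy usage: vectors that are harder to approximate automatically receive more points.
w = np.random.randn(256).astype(np.float32)
approx, coeffs, points = multipoint_quantize(w, num_bits=4, max_points=4, tol=1e-2)
print(len(points), np.linalg.norm(w - approx) / np.linalg.norm(w))
```

Because every q_i lives on the same low-bit grid, varying only the *number* of points per weight vector yields the "effect of mixed precision" the abstract mentions, without requiring hardware support for multiple numeric formats.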