PTQ4ViT: Post-training quantization for vision transformers with twin uniform quantization
Saved in:
| Main Authors: | , , , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online Access: | Order full text |
Abstract: Quantization is one of the most effective methods to compress neural networks and has achieved great success on convolutional neural networks (CNNs). Recently, vision transformers have demonstrated great potential in computer vision. However, previous post-training quantization methods did not perform well on vision transformers, resulting in more than a 1% accuracy drop even with 8-bit quantization. We therefore analyze the problems of quantizing vision transformers. We observe that the distributions of activation values after the softmax and GELU functions are quite different from the Gaussian distribution. We also observe that common quantization metrics, such as MSE and cosine distance, are inaccurate for determining the optimal scaling factor. In this paper, we propose the twin uniform quantization method to reduce the quantization error on these activation values, and we propose a Hessian-guided metric to evaluate different scaling factors, which improves the accuracy of calibration at a small cost. To enable fast quantization of vision transformers, we develop an efficient framework, PTQ4ViT. Experiments show that the quantized vision transformers achieve near-lossless prediction accuracy (less than a 0.5% drop at 8-bit quantization) on the ImageNet classification task.
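
To make the twin uniform quantization idea from the abstract concrete, the following is a minimal PyTorch sketch. It assumes a k-bit code in which one bit selects one of two uniform ranges, each with its own scaling factor, so the many small post-softmax values keep high resolution while the few large values remain representable; the function name, the range-selection rule, and the example scale values are illustrative assumptions, not the paper's reference implementation.

```python
import torch

def twin_uniform_quantize(x, s_fine, s_coarse, bits=8):
    """Quantize-dequantize x with two uniform ranges (illustrative sketch).

    Post-softmax activations are mostly tiny with a few values near 1, so a
    single uniform scale either crushes the small values or clips the large
    ones. Here one code bit selects a range and the remaining bits quantize
    the magnitude inside that range.
    """
    levels = 2 ** (bits - 1)  # codes available inside each range

    # fine range: high resolution for the many small values
    q_fine = torch.clamp(torch.round(x / s_fine), 0, levels - 1)
    # coarse range: reaches the rare large values (up to ~1 after softmax)
    q_coarse = torch.clamp(torch.round(x / s_coarse), 0, levels - 1)

    use_fine = x < levels * s_fine  # value still representable in the fine range
    return torch.where(use_fine, q_fine * s_fine, q_coarse * s_coarse)

# example: 8-bit quantization of an attention map (illustrative scales)
attn = torch.softmax(torch.randn(4, 16, 16) * 4.0, dim=-1)
attn_q = twin_uniform_quantize(attn, s_fine=2 ** -12, s_coarse=2 ** -7)
```

The same two-range idea applies to post-GELU activations, where the negative values occupy a much narrower range than the positive ones and therefore benefit from a separate, smaller scaling factor.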
DOI: 10.48550/arxiv.2111.12293
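
The Hessian-guided metric mentioned in the abstract can likewise be sketched: instead of plain MSE, the output perturbation caused by a candidate scaling factor is weighted by the squared gradients of the task loss, a diagonal approximation of the Hessian, so errors on outputs that matter most for the loss dominate the score. The helper names and the simple uniform quantizer inside the search loop below are assumptions for illustration, not PTQ4ViT's actual API.

```python
import torch

def hessian_guided_distance(out_fp, out_q, grad_out):
    # squared-gradient (diagonal-Hessian-style) weighting of the output error
    return (((out_q - out_fp) ** 2) * grad_out ** 2).sum()

def search_scaling_factor(x, candidates, block_fn, grad_out):
    """Pick the scaling factor whose quantization perturbs the loss least.

    block_fn maps an activation tensor to the block output on calibration
    data; grad_out is dL/dO for that output, collected once beforehand.
    """
    out_fp = block_fn(x)
    best_s, best_d = None, float("inf")
    for s in candidates:
        x_q = torch.clamp(torch.round(x / s), -128, 127) * s  # plain 8-bit uniform
        d = hessian_guided_distance(out_fp, block_fn(x_q), grad_out)
        if d < best_d:
            best_s, best_d = s, d
    return best_s
```

Scoring candidates this way, rather than with MSE or cosine distance, is what the abstract refers to as improving calibration accuracy at a small cost.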