Bit-Weight Adjustment for Bridging Uniform and Non-Uniform Quantization to Build Efficient Image Classifiers
Published in: Electronics (Basel), 2023-12, Vol. 12 (24), p. 5043
Main authors:
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Network quantization, which strives to reduce the precision of model parameters and/or features, is one of the most efficient ways to accelerate model inference and reduce memory consumption, particularly for deep models when performing a variety of real-time vision tasks on edge platforms with constrained resources. Existing quantization approaches function well when using relatively high bit widths but suffer from a decline in accuracy at ultra-low precision. In this paper, we propose a bit-weight adjustment (BWA) module to bridge uniform and non-uniform quantization, successfully quantizing the model to ultra-low bit widths without bringing about noticeable performance degradation. Given uniformly quantized data, the BWA module adaptively transforms these data into non-uniformly quantized data by simply introducing trainable scaling factors. With the BWA module, we combine uniform and non-uniform quantization in a single network, allowing low-precision networks to benefit from both the hardware friendliness of uniform quantization and the high performance of non-uniform quantization. We optimize the proposed BWA module by directly minimizing the classification loss through end-to-end training. Extensive experiments on the ImageNet and CIFAR-10 datasets show that the proposed approach outperforms state-of-the-art approaches across various bit-width settings and can even produce low-precision quantized models that are competitive with their full-precision counterparts.
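The record carries no implementation details beyond the abstract, so the following is only a minimal sketch of how a bit-weight-adjustment-style quantizer could look in PyTorch, assuming that the "trainable scaling factors" re-weight the individual bit positions of a uniformly quantized value. The class name BitWeightAdjustQuantizer, the parameters bit_scales and alpha, and the straight-through gradient handling are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class BitWeightAdjustQuantizer(nn.Module):
    """Sketch of a bit-weight-adjustment-style quantizer (illustrative only).

    Assumption: a b-bit uniform quantizer produces integer levels, and one
    trainable scaling factor per bit position re-weights those bits, turning
    the uniform levels into non-uniform reconstruction levels.
    """

    def __init__(self, num_bits: int = 2):
        super().__init__()
        self.num_bits = num_bits
        # Trainable per-bit scaling factors, initialised to the binary weights
        # 2^0, 2^1, ... so training starts from plain uniform quantization.
        self.bit_scales = nn.Parameter(
            torch.tensor([2.0 ** i for i in range(num_bits)])
        )
        # Trainable clipping threshold for the uniform quantizer.
        self.alpha = nn.Parameter(torch.tensor(1.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Assumes non-negative inputs (e.g., post-ReLU activations).
        levels = 2 ** self.num_bits - 1
        alpha = self.alpha.abs()
        # Uniform quantization: normalise to [0, 1] and round to integer levels.
        x_norm = torch.clamp(x / alpha, 0.0, 1.0)
        q = torch.round(x_norm * levels)
        # Decompose the integer level into bits and re-weight each bit with its
        # trainable scale, yielding non-uniform reconstruction levels.
        out, remainder = torch.zeros_like(x), q
        for i in reversed(range(self.num_bits)):
            bit = (remainder >= 2 ** i).float()
            remainder = remainder - bit * (2 ** i)
            out = out + bit * self.bit_scales[i]
        out = out / levels * alpha
        # Straight-through estimator: the forward pass returns the non-uniform
        # value, while gradients w.r.t. the input flow through the clamp-and-
        # scale path; gradients w.r.t. bit_scales and alpha flow through `out`.
        surrogate = x_norm * alpha
        return out + (surrogate - surrogate.detach())
```

Under these assumptions, such a module would replace the fixed quantizer on the weights or activations of each layer and be trained end to end by minimizing the classification loss, which is how the abstract describes the optimization of the BWA module.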
ISSN: 2079-9292
DOI: 10.3390/electronics12245043