QBitOpt: Fast and Accurate Bitwidth Reallocation during Training
Main authors: , , , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Quantizing neural networks is one of the most effective methods for achieving efficient inference on mobile and embedded devices. In particular, mixed-precision quantized (MPQ) networks, whose layers can be quantized to different bitwidths, achieve better task performance under the same resource constraint than networks with homogeneous bitwidths. However, finding the optimal bitwidth allocation is a challenging problem, as the search space grows exponentially with the number of layers in the network. In this paper, we propose QBitOpt, a novel algorithm for updating bitwidths during quantization-aware training (QAT). We formulate bitwidth allocation as a constrained optimization problem. By combining fast-to-compute sensitivities with efficient solvers during QAT, QBitOpt can produce mixed-precision networks with high task performance that are guaranteed to satisfy strict resource constraints. This contrasts with existing mixed-precision methods that learn bitwidths using gradients and cannot provide such guarantees. We evaluate QBitOpt on ImageNet and confirm that we outperform existing fixed- and mixed-precision methods under the average bitwidth constraints commonly found in the literature.
DOI: 10.48550/arxiv.2307.04535
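The abstract frames bitwidth allocation as a constrained optimization over per-layer sensitivities. As a rough illustration of that general idea only (not the paper's actual QBitOpt solver, whose sensitivity computation and optimizer are specified in the full text), the following Python sketch greedily assigns integer bitwidths under an average-bitwidth constraint; the `allocate_bitwidths` helper, the error model, and all parameter names are hypothetical.

```python
# Minimal sketch, NOT the QBitOpt algorithm: a greedy allocator that distributes
# bitwidths under an average-bitwidth constraint, using precomputed per-layer
# sensitivities. The error model and all names below are assumptions.
import heapq

def allocate_bitwidths(sensitivities, avg_bits=4.0, min_bits=2, max_bits=8):
    """Assign integer bitwidths so that mean(bits) <= avg_bits."""
    n = len(sensitivities)
    bits = [min_bits] * n
    # Extra bits we may distribute on top of the per-layer floor.
    budget = int(avg_bits * n) - min_bits * n

    # Assumed error model: layer i at b bits costs sensitivities[i] * 4**(-b),
    # i.e. uniform quantization noise variance shrinks ~4x per extra bit.
    def gain(i, b):
        # Error reduction from granting layer i its (b+1)-th bit.
        return sensitivities[i] * (4.0 ** (-b) - 4.0 ** (-(b + 1)))

    # Max-heap (via negated keys) over the marginal gain of one more bit.
    heap = [(-gain(i, min_bits), i) for i in range(n)]
    heapq.heapify(heap)
    while budget > 0 and heap:
        _, i = heapq.heappop(heap)
        bits[i] += 1
        budget -= 1
        if bits[i] < max_bits:
            heapq.heappush(heap, (-gain(i, bits[i]), i))
    return bits

if __name__ == "__main__":
    # Four layers; the first is most sensitive, so it receives the most bits
    # while the average stays at the 4-bit budget.
    print(allocate_bitwidths([10.0, 1.0, 0.5, 0.1], avg_bits=4.0))  # [6, 4, 4, 2]
```

On this toy input the most sensitive layer ends up with the most bits while the mean never exceeds the 4-bit budget, mirroring the kind of hard constraint satisfaction the abstract claims for QBitOpt, in contrast to gradient-based bitwidth learning.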