Finding the Task-Optimal Low-Bit Sub-Distribution in Deep Neural Networks
Format: Article
Language: English
Abstract: Quantized neural networks typically require smaller memory footprints and
lower computation complexity, which is crucial for efficient deployment.
However, quantization inevitably leads to a distribution divergence from the
original network, which generally degrades the performance. To tackle this
issue, substantial effort has been made, but most existing approaches lack
statistical considerations and depend on several manual configurations. In this
paper, we present an adaptive-mapping quantization method to learn an optimal
latent sub-distribution that is inherent within models and smoothly
approximated with a concrete Gaussian Mixture (GM). In particular, the network
weights are projected in compliance with the GM-approximated sub-distribution.
This sub-distribution evolves along with the weight update in a co-tuning
schema guided by the direct task-objective optimization. Sufficient experiments
on image classification and object detection over various modern architectures
demonstrate the effectiveness, generalization property, and transferability of
the proposed method. In addition, an efficient deployment flow for mobile CPUs
is developed, achieving up to 7.46× inference acceleration on an
octa-core ARM CPU. Our code has been publicly released at
https://github.com/RunpeiDong/DGMS.
DOI: 10.48550/arxiv.2112.15139
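The abstract above outlines the core mechanism: network weights are softly projected onto a learnable Gaussian Mixture, whose parameters are co-tuned with the weights under the task loss. The following is a minimal, illustrative sketch of that idea in PyTorch; the class name, component count, and initialization are assumptions made for demonstration and do not reproduce the released DGMS implementation.

```python
# Minimal sketch (not the authors' DGMS code) of GM-based soft weight
# projection: each weight is mapped to a softmax-weighted combination of
# K learnable Gaussian component means, so the mapping is differentiable
# and the mixture can be co-tuned with the task loss.
import torch
import torch.nn as nn


class GaussianMixtureQuantizer(nn.Module):
    """Illustrative soft quantizer; all names and defaults are assumptions."""

    def __init__(self, num_components: int = 4, init_range: float = 0.1):
        super().__init__()
        # Learnable component means, log-variances, and mixing logits.
        self.mu = nn.Parameter(torch.linspace(-init_range, init_range, num_components))
        self.log_var = nn.Parameter(torch.zeros(num_components))
        self.pi_logits = nn.Parameter(torch.zeros(num_components))

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        # Per-component Gaussian log-density of every weight.
        var = self.log_var.exp()                      # (K,)
        diff = w.unsqueeze(-1) - self.mu              # (..., K)
        log_prob = -0.5 * diff.pow(2) / var - 0.5 * self.log_var
        log_prob = log_prob + torch.log_softmax(self.pi_logits, dim=0)
        # Soft responsibilities; a temperature could sharpen these toward
        # hard (one-hot) assignments, i.e. true low-bit quantization.
        resp = torch.softmax(log_prob, dim=-1)        # (..., K)
        # Project each weight onto the responsibility-weighted means.
        return resp @ self.mu


if __name__ == "__main__":
    quantizer = GaussianMixtureQuantizer(num_components=4)
    weights = (torch.randn(64, 32) * 0.05).requires_grad_()
    projected = quantizer(weights)
    # A task loss computed on `projected` back-propagates into both the
    # weights and the mixture parameters, realizing the co-tuning above.
    loss = projected.pow(2).mean()
    loss.backward()
    print(projected.shape, weights.grad.abs().mean().item())
```

Because the projection is a softmax-weighted combination of the component means, it remains differentiable, which is what allows the sub-distribution to evolve jointly with the weight updates as described in the abstract.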