Quantizing Convolutional Neural Networks for Low-Power High-Throughput Inference Engines
Format: Article
Language: English
Abstract: Deep learning as a means to inferencing has proliferated thanks to its
versatility and ability to approach or exceed human-level accuracy. These
computational models have seemingly insatiable appetites for computational
resources not only while training, but also when deployed at scales ranging
from data centers all the way down to embedded devices. As such, increasing
consideration is being given to maximizing computational efficiency under
limited hardware and energy budgets and, as a result, inferencing with
reduced precision has emerged as a viable alternative to the IEEE 754 Standard
for Floating-Point Arithmetic. We propose a quantization scheme that allows
inferencing to be carried out using arithmetic that is fundamentally more
efficient than even half-precision floating-point. Our quantization
procedure is significant in that we determine the quantization scheme
parameters by calibrating against the reference floating-point model using a
single inference batch rather than (re)training, and we achieve end-to-end
post-quantization accuracies comparable to the reference model.
DOI: 10.48550/arxiv.1805.07941
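
The abstract states that the quantization scheme parameters are determined by calibrating against the reference floating-point model with a single inference batch rather than (re)training. The sketch below illustrates one such post-training calibration step in NumPy; the max-abs heuristic, the per-tensor signed int8 target, and the function names (calibrate_scale, quantize, dequantize) are illustrative assumptions for exposition, not the scheme actually proposed in the paper.

```python
import numpy as np

def calibrate_scale(calibration_batch):
    """Derive a per-tensor scale from one calibration batch (max-abs heuristic, assumed)."""
    max_abs = float(np.max(np.abs(calibration_batch)))
    qmax = 127  # signed int8 grid covers [-128, 127]
    return max_abs / qmax if max_abs > 0 else 1.0

def quantize(x, scale):
    """Map float values onto the signed int8 grid defined by the calibrated scale."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def dequantize(q, scale):
    """Recover approximate float values to compare against the reference model."""
    return q.astype(np.float32) * scale

# Calibrate on a single batch, then quantize and measure the reconstruction error.
rng = np.random.default_rng(0)
calibration_batch = rng.standard_normal((32, 256)).astype(np.float32)
scale = calibrate_scale(calibration_batch)
q = quantize(calibration_batch, scale)
error = np.mean(np.abs(dequantize(q, scale) - calibration_batch))
print(f"scale={scale:.6f}, mean abs quantization error={error:.6f}")
```

In a complete pipeline of this kind, scales would typically be calibrated per layer (or per channel) from the calibration batch's weights and intermediate activations, and the resulting integer tensors would feed fixed-point multiply-accumulate units, which is what makes the arithmetic cheaper than half-precision floating-point on the targeted inference engines.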