Neural Precision Polarization: Simplifying Neural Network Inference with Dual-Level Precision
Format: Article
Language: English
Abstract: We introduce a precision polarization scheme for DNN inference that uses only very low and very high precision levels, assigning low precision to the majority of network weights and activations while reserving high-precision paths for targeted error compensation. This separation allows each precision level to be optimized independently, reducing memory and computation demands without compromising model accuracy. In the discussed approach, a floating-point model can be trained in the cloud and then downloaded to an edge device, where network weights and activations are directly quantized to the edge device's target format, such as NF4 or INT8. To address the accuracy loss from quantization, surrogate paths are introduced, leveraging low-rank approximations on a layer-by-layer basis. These paths are trained with a sensitivity-based metric on minimal training data to recover the accuracy lost to quantization as well as to process variability, such as when the main prediction path is implemented with analog acceleration. Our simulation results show that neural precision polarization enables approximately 464 TOPS per Watt MAC efficiency and reliability by integrating rank-8 error recovery paths with highly efficient, though potentially unreliable, bit plane-wise compute-in-memory processing.
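The sketch below is my own minimal illustration of the idea described in the abstract, not the authors' code: a linear layer whose frozen main path uses quantized weights (here, symmetric per-tensor INT8 is assumed; the paper also mentions NF4) plus a trainable rank-8 surrogate path that learns to compensate the quantization error. Class names, the quantizer, and the initialization are assumptions made for illustration.

```python
# Hypothetical sketch of precision polarization for one linear layer.
# Low-precision main path (frozen INT8 weights) + high-precision rank-8
# surrogate path trained to recover the quantization error.
import torch
import torch.nn as nn

def quantize_int8(w: torch.Tensor):
    """Symmetric per-tensor INT8 quantization (assumed scheme); returns a
    dequantized copy of the weights used to simulate the low-precision path."""
    scale = w.abs().max() / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127)
    return q * scale, scale

class PolarizedLinear(nn.Module):
    def __init__(self, linear: nn.Linear, rank: int = 8):
        super().__init__()
        w_q, _ = quantize_int8(linear.weight.data)
        # Low-precision main path: quantized weights, kept frozen as buffers.
        self.register_buffer("w_q", w_q)
        self.register_buffer(
            "bias", linear.bias.data.clone() if linear.bias is not None else None
        )
        # High-precision surrogate path: rank-`rank` factors, the only trainable
        # parameters; zero-initialized output factor so the correction starts at 0.
        out_f, in_f = linear.weight.shape
        self.u = nn.Parameter(torch.zeros(out_f, rank))
        self.v = nn.Parameter(torch.randn(rank, in_f) * 0.01)

    def forward(self, x):
        y = nn.functional.linear(x, self.w_q, self.bias)  # low-precision path
        y = y + x @ self.v.t() @ self.u.t()               # low-rank error correction
        return y
```

In a full pipeline one would wrap each layer this way, keep the quantized main path frozen, and fit only the low-rank factors on a small calibration set; the paper's sensitivity-based metric for training and selecting these paths is not reproduced in this sketch.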
DOI: 10.48550/arxiv.2411.05845