Sparsity-Aware Optimization of In-Memory Bayesian Binary Neural Network Accelerators
Saved in:
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Bayesian Neural Networks (BNNs) provide principled estimates of model and data uncertainty by encoding parameters as distributions. This makes them key enablers for reliable AI that can be deployed on safety-critical edge systems. These systems can be made resource efficient by restricting synapses to two synaptic states $\{-1,+1\}$ and using a memristive in-memory computing (IMC) paradigm. However, BNNs pose an additional challenge: they require multiple instantiations for ensembling, consuming extra energy and area. In this work, we propose a novel sparsity-aware optimization for Bayesian Binary Neural Network (BBNN) accelerators that exploits the inherent BBNN sampling sparsity: most of the network consists of synapses that have a high probability of being fixed at $\pm 1$ and require no sampling. The proposed optimization scheme exploits the sampling sparsity that exists both among layers, i.e., only a few layers of the network contain a majority of the probabilistic synapses, and among the parameters, i.e., only a tiny fraction of the parameters in these layers require sampling, reducing the total sampled parameter count by up to $86\%$. We demonstrate no loss in accuracy or uncertainty quantification performance for a VGGBinaryConnect network on the CIFAR-100 dataset mapped onto a custom sparsity-aware phase-change memory (PCM) based IMC simulator. We also develop a simple drift compensation technique to demonstrate robustness to drift-induced degradation. Finally, we project latency, energy, and area for the sparsity-aware BNN implementation in both pipelined and non-pipelined modes. With the sparsity-aware implementation, we estimate up to $5.3\times$ reduction in area and $8.8\times$ reduction in energy compared to a non-sparsity-aware implementation. Our approach also achieves $2.9\times$ higher power efficiency than the state-of-the-art BNN accelerator.
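
To make the sampling-sparsity idea concrete, below is a minimal NumPy sketch, not the paper's implementation: the Beta-distributed toy probabilities and the cutoff `eps` are assumptions for illustration. It freezes synapses whose probability of being $+1$ lies near 0 or 1 and draws Bernoulli samples only for the small probabilistic remainder.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical per-synapse probabilities p = P(w = +1) for one layer of a
# trained BBNN. The Beta(0.1, 0.1) shape (an assumption) pushes most values
# toward 0 or 1, mimicking the bimodal sampling sparsity the paper exploits.
p = rng.beta(0.1, 0.1, size=(512, 512))

# Synapses with probability within `eps` of 0 or 1 are treated as
# deterministic: frozen at -1 or +1, never sampled. `eps` is an assumed knob.
eps = 0.05
probabilistic = (p > eps) & (p < 1.0 - eps)

# Freeze the deterministic majority once.
w = np.where(p > 0.5, 1, -1).astype(np.int8)

# For each ensemble member, sample only the probabilistic minority; this is
# what shrinks the sampled-parameter count (up to 86% per the abstract).
idx = np.flatnonzero(probabilistic)
draws = rng.random(idx.size) < p.ravel()[idx]
w.ravel()[idx] = np.where(draws, 1, -1)

print(f"fraction of synapses requiring sampling: {probabilistic.mean():.1%}")
```

In a sparsity-aware IMC mapping, only the probabilistic indices would need sampling support per ensemble member, which is the source of the area and energy savings the abstract projects.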
DOI: 10.48550/arxiv.2411.07842