Mitigating Adversarial Attack for Compute-in-Memory Accelerator Utilizing On-chip Finetune
Format: Article
Language: English
Abstract: Compute-in-memory (CIM) has been proposed to accelerate convolutional
neural network (CNN) computation by performing parallel multiply-and-accumulate
operations in the analog domain. However, the subsequent processing is still
preferably performed in the digital domain, which makes the analog-to-digital
converter (ADC) critical in CIM architectures. One drawback is the ADC error
introduced by process variation. While research efforts are being made to
improve ADC designs and reduce the offset, we find that the accuracy loss
introduced by the ADC error can be recovered by model weight finetuning. Beyond
compensating for the ADC offset, on-chip weight finetuning can be leveraged to
provide additional protection against adversarial attacks that aim to fool the
inference engine with manipulated input samples. Our evaluation results show
that by adapting the model weights to the specific ADC offset pattern of each
chip, the transferability of the adversarial attack is suppressed. For a chip
attacked by the C&W method, the classification accuracy on the CIFAR-10 dataset
drops to almost 0%. However, when the similarly generated adversarial examples
are applied to other chips, the accuracy remains above 62% for VGG-8 and above
85% for DenseNet-40.
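The abstract describes a per-chip ADC offset perturbing the digitized partial sums of a CIM crossbar. A minimal sketch of that error model, assuming a simple uniform ADC whose per-column offset (in LSBs) stands in for process variation (the function name, bit width, and offset range are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def cim_mac_with_adc(x, w, adc_bits=5, offset_lsb=None):
    """Analog multiply-accumulate followed by a uniform ADC.

    Hypothetical model: the analog partial sum x @ w is quantized to
    adc_bits levels, and each output column's ADC adds a fixed,
    chip-specific offset (in LSBs) caused by process variation.
    """
    analog = x @ w                       # parallel MAC in the "analog" domain
    levels = 2 ** adc_bits
    lo, hi = analog.min(), analog.max()  # illustrative full-scale range
    lsb = (hi - lo) / (levels - 1)
    code = np.round((analog - lo) / lsb)
    if offset_lsb is not None:
        code = code + offset_lsb         # per-column ADC offset pattern
    code = np.clip(code, 0, levels - 1)
    return code * lsb + lo               # reconstructed digital output

x = rng.random((4, 16))
w = rng.random((16, 8))
offset = np.array([1, -1, 2, 0, -2, 1, 0, -1])  # one chip's offset pattern
ideal = cim_mac_with_adc(x, w)
chip = cim_mac_with_adc(x, w, offset_lsb=offset)
print(float(np.abs(chip - ideal).max()))  # offsets perturb the digital output
```

Finetuning the weights against one chip's `offset` pattern, as the abstract suggests, effectively specializes the model to that pattern; an adversarial example crafted against one specialized model then faces a slightly different effective function on every other chip, which is the mechanism behind the reduced transferability.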
DOI: 10.48550/arxiv.2104.06377