Attacking Deep Learning AI Hardware with Universal Adversarial Perturbation
Format: Article
Language: English
Abstract: Universal Adversarial Perturbations are image-agnostic and model-independent
noise patterns that, when added to any image, can mislead trained Deep Convolutional
Neural Networks into making wrong predictions. Since these Universal Adversarial
Perturbations can seriously jeopardize the security and integrity of practical
Deep Learning applications, existing techniques use additional neural networks
to detect the presence of this noise at the input image source. In this paper,
we demonstrate an attack strategy that, when activated by rogue means (e.g.,
malware, a trojan), can bypass these existing countermeasures by adding the
adversarial noise at the AI hardware accelerator stage. We demonstrate the
accelerator-level universal adversarial noise attack on several deep learning
models using co-simulation of the software kernel of the Conv2D function and
the Verilog RTL model of the hardware under the FuseSoC environment.
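
As an illustration of the attack the abstract describes, the following minimal
Python sketch stands in for the paper's hardware-level injection: the paper
performs it in the Verilog RTL model of the accelerator co-simulated under
FuseSoC, whereas here a plain software Conv2D kernel plays the accelerator's
role. All names (`conv2d_compromised`, `uap`, `trigger_active`) are
hypothetical; the point being shown is that a fixed, image-agnostic
perturbation is added inside the compute stage, past the point where an
input-side detector network would have inspected the image.

```python
import numpy as np

def conv2d(x, w, stride=1):
    """Direct Conv2D over a single-channel image (valid padding)."""
    kh, kw = w.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = x[i*stride:i*stride + kh, j*stride:j*stride + kw]
            out[i, j] = np.sum(patch * w)
    return out

def conv2d_compromised(x, w, uap, trigger_active, stride=1):
    """Same kernel, with a rogue payload: when the trigger is active, a
    stored universal (image-agnostic) perturbation is added to the input
    feature map inside the compute stage, i.e. after any detector that
    screens the original input image has already run."""
    if trigger_active:
        x = x + uap  # hypothetical accelerator-stage noise injection
    return conv2d(x, w, stride)

rng = np.random.default_rng(0)
image = rng.random((8, 8))                 # stand-in for any input image
kernel = rng.random((3, 3))                # first-layer convolution weights
uap = 0.05 * rng.standard_normal((8, 8))   # fixed noise, reused for every image

clean = conv2d(image, kernel)
attacked = conv2d_compromised(image, kernel, uap, trigger_active=True)
print(np.max(np.abs(attacked - clean)))    # nonzero: downstream layers see a shifted input
```

Because the addition happens in the accelerator's datapath rather than in the
input pipeline, input-source detectors of the kind the abstract mentions never
observe the perturbation, which is why the attack bypasses them.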
DOI: 10.48550/arxiv.2111.09488