Open DNN Box by Power Side-Channel Attack
Saved in:

| Main authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
Abstract: Deep neural networks are becoming popular and important assets of many AI companies. However, recent studies indicate that they are also vulnerable to adversarial attacks, which can be either white-box or black-box. White-box attacks assume full knowledge of the model, while black-box attacks assume none. In general, revealing more internal information enables much more powerful and efficient attacks. In most real-world applications, however, the internal information of embedded AI devices is unavailable, i.e., they are black boxes. In this work, we therefore propose a technique based on side-channel information to reveal the internal information of black-box models. Specifically, we make the following contributions: (1) we are the first to use side-channel information to reveal the internal network architecture in embedded devices; (2) we are the first to construct models for internal parameter estimation; and (3) we validate our methods on real-world devices and applications. The experimental results show that our method achieves 96.50% accuracy on average. These results suggest that close attention should be paid to the security of many AI applications, and that corresponding defensive strategies should be proposed in the future.
DOI: 10.48550/arxiv.1907.10406
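
The abstract describes the attack only at a high level: power traces captured while a device runs inference are used first to identify the network architecture and then to estimate internal parameters. As a rough illustration of the first step only, the Python sketch below maps hand-crafted power-trace features to an architecture label with an off-the-shelf classifier. This is a minimal sketch under assumptions of our own, not the authors' pipeline; the feature choices, the SVM classifier, and the function names (`trace_features`, `fit_architecture_classifier`) are all illustrative.

```python
# Illustrative sketch (not the paper's method): classify which network
# architecture a device is running from power traces recorded during inference.
import numpy as np
from scipy.signal import find_peaks
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split


def trace_features(trace, fs=1_000_000):
    """Summarize one power trace with a few coarse features.

    trace: 1-D array of power samples captured during a single inference.
    fs:    sampling rate in Hz (assumed value for illustration).
    A real attack would use much richer features or raw traces.
    """
    # Activity bursts in the trace roughly track per-layer computation.
    peaks, _ = find_peaks(trace, height=trace.mean() + 2 * trace.std())
    # Coarse spectral energy in a handful of frequency bands.
    spectrum = np.abs(np.fft.rfft(trace))
    bands = np.array_split(spectrum, 8)
    return np.concatenate((
        [len(trace) / fs,            # inference duration in seconds
         len(peaks),                 # number of activity bursts
         trace.mean(), trace.std()], # overall power statistics
        [b.sum() for b in bands],    # per-band spectral energy
    ))


def fit_architecture_classifier(traces, labels):
    """Train a classifier that predicts the architecture label of a trace.

    traces: list of 1-D power traces from reference devices.
    labels: architecture name for each trace (e.g. "lenet", "vgg", "resnet").
    """
    X = np.stack([trace_features(t) for t in traces])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)  # classifier and held-out accuracy
```

In such a setup, the traces would come from an oscilloscope or on-board power sensor sampled while known architectures run on reference hardware, and the trained classifier would then be applied to traces captured from the black-box target device.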