A Review of Confidentiality Threats Against Embedded Neural Network Models
Saved in:
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: The use of Machine Learning (ML) algorithms, especially Deep Neural
Network (DNN) models, has become a widely accepted standard in many domains,
particularly IoT-based systems. DNN models achieve impressive performance in
several sensitive fields such as medical diagnosis, smart transport, and
security threat detection, and represent valuable Intellectual Property. Over
the last few years, a major trend has been the large-scale deployment of models
on a wide variety of devices. However, this migration to embedded systems is
slowed by the broad spectrum of attacks threatening the integrity,
confidentiality, and availability of embedded models. In this review, we cover
the landscape of attacks targeting the confidentiality of embedded DNN models
that may have a major impact on critical IoT systems, with a particular focus
on model extraction and data leakage. We highlight the fact that Side-Channel
Analysis (SCA) is a relatively unexplored avenue by which a model's
confidentiality can be compromised. A model's input data, architecture, or
parameters can be extracted from power or electromagnetic observations,
demonstrating a real need for protection from a security point of view.
DOI: 10.48550/arxiv.2105.01401