Adversarial Transferability in Wearable Sensor Systems
Format: Article
Language: English
Online access: Order full text
Abstract: Machine learning is used for inference and decision making in wearable sensor
systems. However, recent studies have found that machine learning algorithms
are easily fooled by the addition of adversarial perturbations to their inputs.
What is more interesting is that adversarial examples generated for one machine
learning system are also effective against other systems. This property of
adversarial examples is called transferability. In this work, we take the first
stride in studying adversarial transferability in wearable sensor systems from
the following perspectives: 1) transferability between machine learning
systems, 2) transferability across subjects, 3) transferability across sensor
body locations, and 4) transferability across datasets. We found strong
untargeted transferability in most cases. Targeted attacks were less successful,
with success scores from $0\%$ to $80\%$. The transferability of adversarial
examples depends on many factors, such as the inclusion of data from all
subjects, sensor body position, the number of samples in the dataset, the type of
learning algorithm, and the distributions of the source and target system datasets.
The transferability of adversarial examples decreases sharply when the data
distributions of the source and target systems become more distinct. We also
provide guidelines for the community for designing robust sensor systems.
DOI: 10.48550/arxiv.2003.07982
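To illustrate the untargeted transferability setup described in the abstract, the sketch below crafts FGSM adversarial examples against a "source" model and measures how often they are also misclassified by an independently trained "target" model. This is a minimal sketch, not the paper's code: the architecture, window size, perturbation budget, and synthetic data are assumptions for illustration only.

```python
# Minimal sketch of untargeted adversarial transferability (not the authors' code).
# Synthetic stand-in data is used, so the printed number is illustrative only.
import torch
import torch.nn as nn

WINDOW, CHANNELS, CLASSES = 128, 3, 6  # e.g. 3-axis accelerometer windows (assumed)

def make_model():
    # Small classifier over flattened sensor windows (illustrative architecture).
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(WINDOW * CHANNELS, 64), nn.ReLU(),
        nn.Linear(64, CLASSES),
    )

def train(model, x, y, epochs=20):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()

def fgsm(model, x, y, eps):
    # Untargeted FGSM: perturb inputs in the direction that increases the loss.
    x = x.clone().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

# Synthetic stand-in for windowed wearable-sensor data.
torch.manual_seed(0)
x = torch.randn(512, CHANNELS, WINDOW)
y = torch.randint(0, CLASSES, (512,))

source, target = make_model(), make_model()
train(source, x, y)
train(target, x, y)

# Craft adversarial examples on the source model, evaluate on the target model.
x_adv = fgsm(source, x, y, eps=0.1)
transfer = (target(x_adv).argmax(1) != y).float().mean().item()
print(f"untargeted transfer success: {transfer:.2%}")
```

In this setup, "transfer success" is simply the fraction of adversarial windows that the target model misclassifies; a targeted variant would instead count how often the target model outputs a specific attacker-chosen class.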