Detection of sensors used for adversarial examples against machine learning models
Published in: Results in Engineering, 2024-12, Vol. 24, p. 103021, Article 103021
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Summary: Machine learning (ML) systems that rely on sensor inputs have become integral to many applications, yet they remain vulnerable to sensor-based adversarial example (AE) attacks, in which compromised sensors are exploited to manipulate system outputs. This study addresses the problem of safeguarding such systems by identifying and mitigating compromised sensors, thereby improving their resilience. It introduces a novel detection method based on a feature-removable model (FRM), which allows features to be selectively removed so that inconsistencies in the model's outputs can be identified as different sensors' features are altered. The methodology was validated on a human activity recognition (HAR) model using sensors placed on the chest, wrist, and ankle, with a focus on identifying attacker-compromised sensors. The method achieved an average recall of 0.92 and an average precision of 0.72 for detected sensors, demonstrating that the approach can accurately detect and identify compromised sensors. This work advances the security and robustness of ML systems against sensor-based AE attacks.
Highlights:
• Novel technique to detect compromised sensors in adversarial ML attacks.
• Effective use of a feature-removable model (FRM) to identify compromised sensors.
• Validated on a HAR model, showing practical effectiveness in real-world scenarios.
• Achieved 0.92 recall and 0.72 precision in detecting adversarial threats.
• Strengthens security by mitigating sensor-based attacks on ML systems.
ISSN: 2590-1230
DOI: 10.1016/j.rineng.2024.103021
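
The abstract does not spell out the detection procedure, but the core idea — query a feature-removable model on different subsets of sensor features and flag sensors whose presence makes the outputs inconsistent — can be illustrated with a short sketch. Everything below is a hypothetical reading of that idea, not the paper's implementation: the `predict(x, active_sensors=...)` interface, the consensus vote, and the flagging rule are all assumptions.

```python
"""Illustrative sketch of sensor-level AE detection via feature removal.

Assumptions (not taken from the paper): the trained HAR model is
"feature-removable", i.e. it exposes a hypothetical
`predict(x, active_sensors)` that can classify using any subset of
sensor features. The consensus vote and flagging rule below are one
plausible reading of "identifying inconsistencies in the model's
outputs when different sensor features are altered".
"""
from collections import Counter
from itertools import combinations

SENSORS = ("chest", "wrist", "ankle")  # placements used in the paper's HAR study


def detect_compromised(model, x):
    """Return the sensors whose features look inconsistent with the rest."""
    # Query the FRM once per non-empty subset of sensors.
    preds = {}
    for k in range(1, len(SENSORS) + 1):
        for subset in combinations(SENSORS, k):
            preds[subset] = model.predict(x, active_sensors=subset)

    # The label supported by most subsets serves as the consensus.
    consensus = Counter(preds.values()).most_common(1)[0][0]

    flagged = []
    for s in SENSORS:
        with_s = [y for sub, y in preds.items() if s in sub]
        without_s = [y for sub, y in preds.items() if s not in sub]
        # A compromised sensor should drag predictions away from the
        # consensus whenever its features are included.
        rate_with = sum(y != consensus for y in with_s) / len(with_s)
        rate_without = sum(y != consensus for y in without_s) / len(without_s)
        if rate_with > rate_without:
            flagged.append(s)
    return flagged
```

Given ground truth about which sensors an attacker actually perturbed, the per-sensor recall and precision reported in the abstract would follow the usual definitions — recall = |flagged ∩ compromised| / |compromised| and precision = |flagged ∩ compromised| / |flagged| — averaged across attack trials.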