DODEM: DOuble DEfense Mechanism Against Adversarial Attacks Towards Secure Industrial Internet of Things Analytics
Saved in:

Main Author: | |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Summary: | The Industrial Internet of Things (I-IoT) is a collaboration of devices,
sensors, and networking equipment used to monitor and collect data from
industrial operations. Machine learning (ML) methods use this data to make
high-level decisions with minimal human intervention. Data-driven predictive
maintenance (PDM) is a crucial ML-based I-IoT application that finds an optimal
maintenance schedule for industrial assets. The performance of these ML methods
can be seriously threatened by adversarial attacks, in which an adversary
crafts perturbed data and sends it to the ML model to degrade its prediction
performance. The models should remain robust against these attacks, where
robustness is measured by how much perturbation of the input data affects model
performance. Hence, there is a need for effective defense mechanisms that can
protect these models against adversarial attacks. In this work, we propose a
double defense mechanism to detect and mitigate adversarial attacks in I-IoT
environments. We first detect whether a given sample is adversarial using
novelty detection algorithms. Then, based on the outcome of this detection,
which marks an instance as attacked or normal, we select adversarial retraining
or standard training as a secondary defense layer: if there is an attack,
adversarial retraining provides a more robust model, while we apply standard
training to regular samples. Since we may not know in advance whether an attack
will take place, this adaptive mechanism allows us to account for irregular
changes in the data. The results show that our double defense strategy is
highly effective, improving model robustness by up to 64.6% and 52% compared to
standard training and adversarial retraining, respectively. |
DOI: | 10.48550/arxiv.2301.09740 |
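The two-layer scheme described in the summary (a novelty detector that flags a sample as attacked or normal, followed by a choice between adversarial retraining and standard training) can be sketched as follows. Note that the `IsolationForest` detector, the synthetic sensor data, and the `select_defense` helper are illustrative assumptions, not the paper's actual components or datasets.

```python
# A minimal sketch of the double-defense idea, assuming scikit-learn is
# available. IsolationForest stands in for the paper's novelty detection
# algorithms; the data is a synthetic placeholder for I-IoT sensor readings.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in for clean I-IoT sensor readings (hypothetical data).
X_clean = rng.normal(0.0, 1.0, size=(500, 4))

# Layer 1: a novelty detector fit on clean data flags suspicious inputs.
detector = IsolationForest(random_state=0).fit(X_clean)

def select_defense(sample):
    """Layer 2: route a sample to adversarial retraining if the detector
    marks it as an attack, otherwise to standard training."""
    flag = detector.predict(sample.reshape(1, -1))[0]  # -1 = attack, +1 = normal
    return "adversarial_retraining" if flag == -1 else "standard_training"

print(select_defense(np.zeros(4)))      # a typical clean sample
print(select_defense(np.full(4, 8.0)))  # a heavily perturbed sample
```

The per-sample branch mirrors the adaptive aspect stressed in the summary: since it is unknown in advance whether an attack will occur, the training strategy is chosen at detection time rather than fixed upfront.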