Exploring Targeted and Stealthy False Data Injection Attacks via Adversarial Machine Learning
Published in: | IEEE Internet of Things Journal, 2022-08, Vol. 9 (15), pp. 14116-14125 |
Main authors: | , , , , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | State estimation methods used in cyber-physical systems (CPSs), such as the smart grid, are vulnerable to false data injection attacks (FDIAs). Although numerous deep learning methods have been proposed to detect such attacks, deep neural networks (DNNs) are highly susceptible to adversarial attacks, which modify the inputs of DNNs with imperceptible but malicious perturbations. This article proposes a method to explore targeted and stealthy FDIAs via adversarial machine learning. We formulate FDIAs as sparse optimization problems that achieve the initial attack objectives while remaining stealthy during attacks. We propose a parallel optimization algorithm to solve the problems efficiently and explore additional sparse-state attacks. The experimental results show that for the IEEE 14-bus and 118-bus systems, the success rate of two-state sparse attacks with small-scale targets is as high as 80%. Moreover, the attack success rate continues to increase as the number of attacked states grows. The proposed attacks demonstrate that attackers can bypass both bad data detectors and neural network detectors while keeping the initial attack objectives unchanged, which poses a critical and urgent security threat in CPSs. |
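The stealth against bad data detectors described in the abstract builds on a well-known property of least-squares state estimation: an injection of the form a = Hc shifts the estimated state by c without changing the measurement residual, so a residual-thresholding bad data detector cannot see it. The sketch below illustrates only this classic condition on a toy system; the measurement matrix H, measurement vector z, and perturbation c are made-up values, and the paper's own sparse-optimization formulation and neural-network detector evasion are not reproduced here.

```python
import numpy as np

def residual_norm(H, z):
    """Least-squares state estimate x_hat, returning ||z - H @ x_hat||."""
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x_hat)

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 2))                    # toy measurement matrix (hypothetical)
z = H @ np.array([1.0, -0.5]) + 0.01 * rng.normal(size=4)  # noisy measurements

c = np.array([0.3, 0.0])                       # sparse target shift: only one state attacked
a = H @ c                                      # stealthy injection a = Hc

r_clean = residual_norm(H, z)
r_attacked = residual_norm(H, z + a)

# The residual is numerically unchanged, so a chi-squared bad data
# detector that thresholds the residual norm does not flag the attack,
# even though the state estimate has shifted by c.
assert abs(r_clean - r_attacked) < 1e-9
```

Evading a learned DNN detector as well, while keeping the sparsity and target of c fixed, is the harder problem the paper addresses via its sparse-optimization formulation.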
ISSN: | 2327-4662 |
DOI: | 10.1109/JIOT.2022.3147040 |