Vulnerability of Machine Learning Approaches Applied in IoT-Based Smart Grid: A Review


Full Description

Saved in:
Bibliographic Details
Published in: IEEE Internet of Things Journal 2024-06, Vol. 11 (11), p. 18951-18975
Main Authors: Zhang, Zhenyong, Liu, Mengxiang, Sun, Mingyang, Deng, Ruilong, Cheng, Peng, Niyato, Dusit, Chow, Mo-Yuen, Chen, Jiming
Format: Article
Language: English
Subjects:
Online Access: Order full text
Description
Summary: Machine learning (ML) is increasingly used in the Internet of Things (IoT)-based smart grid. However, the trustworthiness of ML is a serious issue that must be addressed to accommodate the trend of ML-based smart grid applications (MLsgAPPs). Adversarial distortion injected into the power signal can severely disrupt the system's normal control and operation, so it is imperative to assess the vulnerability of MLsgAPPs deployed in safety-critical power systems. In this article, we provide a comprehensive review of recent progress in designing attack and defense methods for MLsgAPPs. Unlike traditional surveys of ML security, this is the first review of MLsgAPP security that focuses on the characteristics of power systems. We first highlight the specifics of constructing adversarial attacks on MLsgAPPs. The vulnerability of MLsgAPPs is then analyzed from the perspectives of the power system and the ML model, respectively. Afterward, we comprehensively survey and compare existing studies on adversarial attacks against MLsgAPPs in generation, transmission, distribution, and consumption scenarios, and review the countermeasures according to the attacks they defend against. Finally, future research directions are discussed from the attacker's and defender's sides, respectively. We also analyze the potential vulnerability of large language model-based (e.g., ChatGPT) smart grid applications. Overall, our purpose is to encourage more researchers to investigate the adversarial issues of MLsgAPPs.
ISSN: 2327-4662
DOI: 10.1109/JIOT.2024.3349381