Robust IoT Botnet Detection Framework Resilient to Gradient-Based Adversarial Attacks


Bibliographic Details
Published in: SN Computer Science, 2024-09, Vol. 5 (7), p. 870, Article 870
Authors: Krishnan, Deepa; Shrinath, Pravin
Format: Article
Language: English
Online access: Full text
Description
Abstract: The exponential growth of Internet of Things (IoT) devices has led to an escalation in the number of possible points of attack, making IoT networks vulnerable to a diverse range of threats. In recent years, machine learning and deep learning techniques have been increasingly employed to enhance the security of IoT systems. Nevertheless, security systems that rely on these techniques are themselves susceptible to adversarial attacks. The proposed study introduces a botnet detection framework designed to withstand gradient-based adversarial attacks. A weighted multi-layer perceptron (MLP) model is developed to identify security attacks using the imbalanced NF-BoT-IoT dataset, and this model is then used to evaluate resilience against prominent gradient-based attacks: the fast gradient sign method (FGSM), projected gradient descent (PGD), and backward pass differentiable approximation (BPDA), in both targeted and non-targeted modes. Extensive experiments assess the severity of each category of adversarial attack by varying the perturbation level ϵ and analyzing the resulting changes in accuracy, precision, recall, F1 score, and loss. Additionally, an adversarial attack defense strategy based on a shallow denoising auto-encoder is proposed to enhance the MLP model's robustness against these attacks. The proposed approach restores the balanced weighted MLP model's performance to 99.53% accuracy, 99.77% precision, 98.53% recall, and 99.12% F1 score under FGSM and PGD adversarial attacks, and to 97.90% accuracy, 98.46% precision, 97.90% recall, and 98.15% F1 score under BPDA adversarial attacks. A comparative analysis with the existing adversarial training approach demonstrates the superior adversarial defense capability of the proposed method across various perturbation strengths and attack models.
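For context, the gradient-based attacks named in the abstract follow well-known formulations: FGSM takes a single step x_adv = x + ϵ·sign(∇ₓL), while PGD iterates smaller steps and projects back into the ϵ-ball around the clean input. The following is a minimal PyTorch sketch of both in non-targeted mode against a generic classifier; the function names, step size `alpha`, and loss choice are illustrative assumptions, not the authors' implementation, and BPDA (which backpropagates through a differentiable approximation of non-differentiable components) is omitted.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon, loss_fn=nn.CrossEntropyLoss()):
    """Single-step non-targeted FGSM: x_adv = x + epsilon * sign(grad_x L)."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss.
    return (x + epsilon * x.grad.sign()).detach()

def pgd_attack(model, x, y, epsilon, alpha=0.01, steps=40,
               loss_fn=nn.CrossEntropyLoss()):
    """Iterative non-targeted PGD with projection onto the L-inf epsilon-ball."""
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        # Project back into the epsilon-ball around the clean input.
        x_adv = torch.max(torch.min(x_adv, x_orig + epsilon), x_orig - epsilon)
    return x_adv.detach()
```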
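The defense described, a shallow denoising auto-encoder placed in front of the classifier, can be sketched roughly as below (continuing from the imports in the previous block); the layer sizes, the MSE reconstruction objective, and the training wiring are assumptions inferred from the abstract rather than the paper's actual configuration.

```python
class ShallowDenoisingAE(nn.Module):
    """One-hidden-layer denoising auto-encoder; the width is an illustrative guess."""
    def __init__(self, n_features, n_hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_dae(dae, loader, perturb, epochs=10, lr=1e-3):
    """Train the DAE to reconstruct clean flows from adversarially perturbed ones."""
    opt = torch.optim.Adam(dae.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:
            x_noisy = perturb(x, y)       # e.g. FGSM/PGD copies of the batch
            loss = mse(dae(x_noisy), x)   # reconstruction target is the clean input
            opt.zero_grad()
            loss.backward()
            opt.step()

# At inference time, inputs are denoised before classification:
#   logits = mlp(dae(x))
```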
ISSN: 2662-995X (print); 2661-8907 (electronic)
DOI: 10.1007/s42979-024-03242-0