Deep PackGen: A Deep Reinforcement Learning Framework for Adversarial Network Packet Generation
Saved in:
Main authors: , , , , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Recent advancements in artificial intelligence (AI) and machine learning (ML)
algorithms, coupled with the availability of faster computing infrastructure,
have enhanced the security posture of cybersecurity operations centers
(defenders) through the development of ML-aided network intrusion detection
systems (NIDS). Concurrently, the abilities of adversaries to evade security
have also increased with the support of AI/ML models. Therefore, defenders need
to proactively prepare for evasion attacks that exploit the detection
mechanisms of NIDS. Recent studies have found that the perturbation of
flow-based and packet-based features can deceive ML models, but these
approaches have limitations. Perturbations made to the flow-based features are
difficult to reverse-engineer, while samples generated with perturbations to
the packet-based features are not playable.
Our methodological framework, Deep PackGen, employs deep reinforcement
learning to generate adversarial packets and aims to overcome the limitations
of approaches in the literature. By taking raw malicious network packets as
inputs and systematically making perturbations on them, Deep PackGen
camouflages them as benign packets while still maintaining their functionality.
In our experiments, using publicly available data, Deep PackGen achieved an
average adversarial success rate of 66.4% against various ML models and across
different attack types. Our investigation also revealed that more than 45% of
the successful adversarial samples were out-of-distribution packets that evaded
the decision boundaries of the classifiers. The knowledge gained from our study
on the adversary's ability to make specific evasive perturbations to different
types of malicious packets can help defenders enhance the robustness of their
NIDS against evolving adversarial attacks.
DOI: 10.48550/arxiv.2305.11039
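The loop the abstract describes, in which an agent repeatedly perturbs a malicious packet's mutable fields until a classifier labels it benign, can be illustrated with a minimal tabular Q-learning sketch. Everything below is an illustrative assumption: the stand-in classifier, the action set, and the packet field names are invented for the example, and the paper itself uses deep RL over raw packet bytes rather than this toy setup.

```python
import random

# Toy stand-in for an ML-based NIDS: flags packets with short
# payloads and low TTL as malicious. Purely illustrative.
def toy_classifier(pkt):
    return "malicious" if pkt["payload_len"] < 60 and pkt["ttl"] < 64 else "benign"

# Hypothetical perturbation actions restricted to fields that do not
# break packet functionality (padding, TTL tweaks, window size).
ACTIONS = ["inc_ttl", "dec_ttl", "pad_payload", "set_window"]

def apply_action(pkt, action):
    pkt = dict(pkt)  # perturb a copy, keep the original intact
    if action == "inc_ttl":
        pkt["ttl"] = min(255, pkt["ttl"] + 8)
    elif action == "dec_ttl":
        pkt["ttl"] = max(1, pkt["ttl"] - 8)
    elif action == "pad_payload":
        pkt["payload_len"] += 16  # padding preserves the payload's semantics
    elif action == "set_window":
        pkt["window"] = 65535
    return pkt

def q_learn_episode(q, pkt, eps=0.2, alpha=0.5, gamma=0.9, max_steps=10):
    """One tabular Q-learning episode over coarsely discretized states.

    Reward is +1 when the classifier is fooled, with a small step
    penalty to favor short perturbation sequences.
    """
    for _ in range(max_steps):
        state = (pkt["ttl"] // 32, pkt["payload_len"] // 32)
        if random.random() < eps:  # epsilon-greedy exploration
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q.get((state, i), 0.0))
        pkt = apply_action(pkt, ACTIONS[a])
        done = toy_classifier(pkt) == "benign"
        reward = 1.0 if done else -0.1
        nxt = (pkt["ttl"] // 32, pkt["payload_len"] // 32)
        best_next = max(q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
        target = reward + (0.0 if done else gamma * best_next)
        q[(state, a)] = q.get((state, a), 0.0) + alpha * (target - q.get((state, a), 0.0))
        if done:
            return pkt, True
    return pkt, False
```

Running many episodes against the toy classifier drives the agent toward short perturbation sequences (for example, padding the payload twice) that flip the label while leaving the malicious payload itself untouched, which is the functionality-preserving evasion the abstract refers to.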