Robustness of ML-Enhanced IDS to Stealthy Adversaries

Bibliographic Details
Published in: arXiv.org 2021-04
Main authors: Wong, Vance; Emanuello, John
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Intrusion Detection Systems (IDS) enhanced with Machine Learning (ML) have demonstrated the capacity to efficiently build a prototype of "normal" cyber behavior and thereby detect malicious activity with greater accuracy than traditional rule-based IDS. Because these models are largely black boxes, their acceptance requires proof of robustness to stealthy adversaries. Since it is impossible (outside of controlled experiments) to build a baseline from activity completely free of that of malicious cyber actors, the training data for deployed models will be poisoned with examples of the very activity analysts want to be alerted about. We train an autoencoder-based anomaly detection system on network activity with various proportions of malicious activity mixed in and demonstrate that it is robust to this sort of poisoning.
ISSN:2331-8422
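The experiment the abstract describes can be sketched in miniature: train an autoencoder on a feature set that is mostly benign but deliberately contaminated with malicious examples, then flag test points whose reconstruction error exceeds a threshold fit on the (poisoned) training scores. Everything below is an illustrative assumption, not the paper's method: the synthetic Gaussian features standing in for network activity, the single-hidden-layer architecture, the 5% poisoning rate, and the 90th-percentile threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n_normal, n_malicious, dim=8):
    # Synthetic stand-in for network-flow features (assumption, not the
    # paper's data): benign activity near one mode, malicious shifted away.
    normal = rng.normal(0.0, 1.0, size=(n_normal, dim))
    malicious = rng.normal(4.0, 1.0, size=(n_malicious, dim))
    return normal, malicious

class TinyAutoencoder:
    """Single-hidden-layer autoencoder trained by plain gradient descent."""

    def __init__(self, dim, hidden=3, lr=0.01):
        self.W1 = rng.normal(0, 0.1, size=(dim, hidden))   # encoder weights
        self.W2 = rng.normal(0, 0.1, size=(hidden, dim))   # decoder weights
        self.lr = lr

    def forward(self, X):
        H = np.tanh(X @ self.W1)      # bottleneck encoding
        return H, H @ self.W2         # linear decoder / reconstruction

    def fit(self, X, epochs=500):
        for _ in range(epochs):
            H, Xhat = self.forward(X)
            err = Xhat - X                          # d(0.5*MSE)/dXhat
            gW2 = H.T @ err / len(X)
            gH = (err @ self.W2.T) * (1 - H ** 2)   # backprop through tanh
            gW1 = X.T @ gH / len(X)
            self.W1 -= self.lr * gW1
            self.W2 -= self.lr * gW2

    def score(self, X):
        # Per-sample reconstruction error = anomaly score.
        _, Xhat = self.forward(X)
        return np.mean((X - Xhat) ** 2, axis=1)

# Poisoned training set: 5% malicious examples mixed into "normal" traffic.
normal, malicious = make_data(950, 50)
train = np.vstack([normal, malicious])
ae = TinyAutoencoder(dim=8)
ae.fit(train)

# Threshold fit on the poisoned training scores themselves.
threshold = np.percentile(ae.score(train), 90)

test_normal, test_malicious = make_data(200, 200)
s_norm = ae.score(test_normal)
s_mal = ae.score(test_malicious)
print("malicious flagged:", np.mean(s_mal > threshold))
print("normal flagged:", np.mean(s_norm > threshold))
```

Even though 5% of the training data is malicious, the bottleneck forces the autoencoder to spend its limited capacity on the dominant (benign) structure, so malicious points still reconstruct poorly; sweeping the poisoning proportion upward is the robustness question the paper studies.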