CyberShapley: Explanation, prioritization, and triage of cybersecurity alerts using informative graph representation
Published in: Computers & Security, 2025-03, Vol. 150, p. 104270, Article 104270
Main authors:
Format: Article
Language: English
Subjects:
Online access: Full text
Summary: In recent years, the field of cybersecurity has seen significant advancements in the ability to detect anomalies and cyberattacks. This progress can be attributed to the use of deep learning (DL) models. Despite their superior performance, such models are imperfect, and their complex architecture makes them opaque and uninterpretable. Therefore, security analysts cannot effectively analyze the alerts generated by these models. Recently proposed methods that provide explanations for the predictions of DL-based anomaly detectors tend to focus on the models' low-level input features, which necessitate further analysis to understand the alerts. As a result, when triaging alerts, security analysts spend a great deal of time analyzing them before deciding whether and how to act. To address this issue and ensure that the explanations produced for DL models' output are beneficial to security analysts, we propose CyberShapley, an XAI approach that aims to enhance the interpretability of alerts generated by anomaly detectors by providing user-friendly explanations for the decisions made by these models. We evaluated our method on an LSTM-based anomaly detection model that raises alerts on the anomalous event sequences in the DARPA Engagement #3 and PublicArena datasets. Our method explains the anomalous event sequences associated with alerts by visualizing them as human-interpretable subgraphs (i.e., connected components) and highlighting (prioritizing) the most important components. Consequently, analysts can easily triage the event sequences by focusing on the components with high importance while disregarding the components with low importance.
ISSN: | 0167-4048 |
DOI: | 10.1016/j.cose.2024.104270 |
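The abstract describes decomposing an alert's anomalous event sequence into connected-component subgraphs and ranking those components by importance so analysts can triage the most significant ones first. Below is a minimal sketch of that general idea, assuming per-event attribution scores (e.g., Shapley-style values) have already been computed by some explainer; the function names, the event-graph representation, and the summed-score ranking are illustrative assumptions, not the authors' actual implementation.

```python
from collections import defaultdict

def connected_components(edges):
    """Group the nodes of an undirected event graph into connected
    components using breadth-first search (stdlib only)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for node in adj:
        if node in seen:
            continue
        comp, queue = set(), [node]
        seen.add(node)
        while queue:
            cur = queue.pop()
            comp.add(cur)
            for nb in adj[cur]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        comps.append(comp)
    return comps

def prioritize_components(edges, event_scores):
    """Rank components by the sum of per-event attribution scores
    (a hypothetical aggregation; the paper's scheme may differ).
    Returns a list of (total_score, component) in descending order."""
    ranked = []
    for comp in connected_components(edges):
        total = sum(score for (u, v), score in event_scores.items()
                    if u in comp and v in comp)
        ranked.append((total, comp))
    ranked.sort(key=lambda t: -t[0])
    return ranked

# Illustrative provenance-style events: (subject, object) pairs with
# made-up attribution scores from some upstream explainer.
events = [("proc_a", "file_x"), ("file_x", "sock_y"), ("proc_b", "file_z")]
scores = {("proc_a", "file_x"): 0.9,
          ("file_x", "sock_y"): 0.7,
          ("proc_b", "file_z"): 0.1}
ranking = prioritize_components(events, scores)
```

With these toy inputs, the events split into two components; the one containing `proc_a`, `file_x`, and `sock_y` accumulates the higher total score, so an analyst would triage it first and could disregard the low-scoring component.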