Automated Adversary Emulation for Cyber-Physical Systems via Reinforcement Learning
Format: Article
Language: English
Abstract: Adversary emulation is an offensive exercise that provides a comprehensive assessment of a system's resilience against cyber attacks. However, adversary emulation is typically a manual process, making it costly and hard to deploy in cyber-physical systems (CPS) with complex dynamics, vulnerabilities, and operational uncertainties. In this paper, we develop an automated, domain-aware approach to adversary emulation for CPS. We formulate a Markov Decision Process (MDP) model to determine an optimal attack sequence over a hybrid attack graph with cyber (discrete) and physical (continuous) components and related physical dynamics. We apply model-based and model-free reinforcement learning (RL) methods to solve the discrete-continuous MDP in a tractable fashion. As a baseline, we also develop a greedy attack algorithm and compare it with the RL procedures. We summarize our findings through a numerical study on sensor deception attacks in buildings to compare the performance and solution quality of the proposed algorithms.
DOI: 10.48550/arxiv.2011.04635
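The record carries only the abstract, not the paper's actual formulation. As a rough illustration of the kind of discrete-continuous attack MDP, greedy baseline, and model-free RL procedure the abstract describes, the following is a minimal Python sketch: the attack graph, thermostat dynamics, reward, and all names (ToyHybridCPSEnv, greedy_attack, q_learning) are assumptions made for this toy example, not taken from the paper.

```python
# Illustrative sketch only: toy hybrid discrete-continuous MDP for sensor
# deception, a one-step greedy attack baseline, and tabular Q-learning.
import copy
import random


class ToyHybridCPSEnv:
    """Discrete part: compromised attack-graph nodes; continuous part: one
    room temperature under a thermostat loop. All numbers are made up."""

    GRAPH = {0: [], 1: [0], 2: [1]}   # attack-graph node -> prerequisite nodes
    SETPOINT = 21.0                   # controller target temperature (deg C)

    def reset(self):
        self.compromised = {n: False for n in self.GRAPH}
        self.temp = self.SETPOINT
        return self._state()

    def _state(self):
        return (tuple(self.compromised.values()), self.temp)

    def actions(self):
        # Exploits enabled by the attack graph, sensor spoofing once node 2
        # (the sensor) is owned, and a no-op.
        acts = [("exploit", n) for n, pre in self.GRAPH.items()
                if not self.compromised[n]
                and all(self.compromised[p] for p in pre)]
        if self.compromised[2]:
            acts += [("spoof", +2.0), ("spoof", -2.0)]
        acts.append(("wait", None))
        return acts

    def step(self, action):
        kind, arg = action
        bias = 0.0
        if kind == "exploit" and random.random() < 0.7:   # success probability
            self.compromised[arg] = True
        elif kind == "spoof":
            bias = arg                                    # sensor deception bias
        # The thermostat reacts to the *reported* temperature (true temp + bias),
        # so a spoofed reading drives the true temperature off the setpoint.
        self.temp += 0.5 * (self.SETPOINT - (self.temp + bias))
        reward = abs(self.temp - self.SETPOINT) - 0.1     # damage minus step cost
        return self._state(), reward


def greedy_attack(env, horizon=20):
    """Greedy baseline: try each action on a copy of the environment and
    commit to the one with the highest immediate reward."""
    env.reset()
    total = 0.0
    for _ in range(horizon):
        best = max(env.actions(), key=lambda a: copy.deepcopy(env).step(a)[1])
        _, reward = env.step(best)
        total += reward
    return total


def q_learning(env, episodes=500, horizon=20, alpha=0.1, gamma=0.95, eps=0.2):
    """Model-free stand-in: tabular epsilon-greedy Q-learning over a
    discretized hybrid state (compromised-node tuple, rounded temperature)."""
    Q = {}

    def key(state, action):
        nodes, temp = state
        return (nodes, round(temp), action)

    for _ in range(episodes):
        state = env.reset()
        for _ in range(horizon):
            acts = env.actions()
            if random.random() < eps:
                action = random.choice(acts)
            else:
                action = max(acts, key=lambda a: Q.get(key(state, a), 0.0))
            nxt, reward = env.step(action)
            best_next = max(Q.get(key(nxt, a), 0.0) for a in env.actions())
            old = Q.get(key(state, action), 0.0)
            Q[key(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return Q


if __name__ == "__main__":
    env = ToyHybridCPSEnv()
    print("greedy baseline return:", round(greedy_attack(env), 2))
    print("Q-table entries learned:", len(q_learning(env)))
```

In this toy model, spoofing the temperature sensor biases the thermostat's feedback and pushes the true temperature away from the setpoint, which is the "physical damage" the attacker accumulates as reward after first working through the cyber portion of the attack graph.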