APRIL: towards Scalable and Transferable Autonomous Penetration Testing in Large Action Space via Action Embedding


Bibliographic Details
Published in: IEEE Transactions on Dependable and Secure Computing, 2024-12, pp. 1-17
Authors: Zhou, Shicheng; Liu, Jingju; Lu, Yuliang; Yang, Jiahai; Hou, Dongdong; Zhang, Yue; Hu, Shulong
Format: Article
Language: English
Description
Abstract: Penetration testing (pentesting) assesses cybersecurity through simulated attacks, but the conventional manual approach is costly, time-consuming, and personnel-constrained. Reinforcement learning (RL) provides an agent-environment interaction learning paradigm, making it a promising route to autonomous pentesting. However, agents' limited scalability in large action spaces and poor policy transferability across scenarios restrict the applicability of RL-based autonomous pentesting. To address these challenges, we present APRIL, a novel RL-based autonomous pentesting framework for training agents that are scalable and transferable in large action spaces. In APRIL, we construct a realistic, bounded, host-level state space via embedding techniques, avoiding the complexity of handling unbounded network-level information. We exploit semantic correlations between pentesting actions as prior knowledge to map the discrete action space into a continuous, semantically meaningful embedding space. Agents are then trained to reason over actions within this embedding space, aided by two key methods: an upper-confidence-bound-based action refinement method that encourages efficient exploration, and a distance-aware loss that improves learning efficiency and generalization performance. We conduct experiments in simulated scenarios built from virtualized vulnerable environments. The results demonstrate APRIL's scalability in large action spaces and its ability to facilitate policy transfer across diverse scenarios.
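The abstract's core mechanism can be illustrated with a minimal sketch: a policy emits a continuous "proto-action" in the embedding space, the nearest discrete action embeddings are retrieved, and an upper-confidence-bound bonus favors rarely tried neighbors. All names, dimensions, and the random embeddings below are illustrative assumptions, not the paper's actual implementation (which learns embeddings from semantic correlations between pentesting actions).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N discrete pentesting actions, each embedded in a
# d-dimensional space. Random vectors stand in for learned embeddings here.
N_ACTIONS, EMBED_DIM = 1000, 16
action_embeddings = rng.normal(size=(N_ACTIONS, EMBED_DIM))
visit_counts = np.ones(N_ACTIONS)   # per-action visit counts for the UCB bonus
total_steps = 1

def select_action(proto_action, k=10, c=1.0):
    """Map a continuous proto-action to a discrete action.

    1. Retrieve the k nearest discrete action embeddings (distance term).
    2. Add a UCB-style exploration bonus so rarely tried neighbors can win.
    """
    dists = np.linalg.norm(action_embeddings - proto_action, axis=1)
    nearest = np.argsort(dists)[:k]                    # k-NN lookup
    bonus = c * np.sqrt(np.log(total_steps) / visit_counts[nearest])
    scores = -dists[nearest] + bonus                   # closer and less-visited score higher
    return int(nearest[np.argmax(scores)])

# Usage: a policy network would produce the proto-action; here it is random.
proto = rng.normal(size=EMBED_DIM)
a = select_action(proto)
visit_counts[a] += 1
total_steps += 1
```

With the exploration coefficient `c` set to zero, the selection degenerates to a plain nearest-neighbor lookup in the embedding space; increasing `c` trades off proximity against novelty among the k candidates.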
ISSN: 1545-5971
DOI: 10.1109/TDSC.2024.3518500