Hacking Back the AI-Hacker: Prompt Injection as a Defense Against LLM-driven Cyberattacks
Large language models (LLMs) are increasingly being harnessed to automate cyberattacks, making sophisticated exploits more accessible and scalable. In response, we propose a new defense strategy tailored to counter LLM-driven cyberattacks. We introduce Mantis, a defensive framework that exploits LLM...
Gespeichert in:
Hauptverfasser: | , , |
---|---|
Format: | Artikel |
Sprache: | eng |
Schlagworte: | |
Online-Zugang: | Volltext bestellen |
Tags: |
Tag hinzufügen
Keine Tags, Fügen Sie den ersten Tag hinzu!
|
Zusammenfassung: | Large language models (LLMs) are increasingly being harnessed to automate
cyberattacks, making sophisticated exploits more accessible and scalable. In
response, we propose a new defense strategy tailored to counter LLM-driven
cyberattacks. We introduce Mantis, a defensive framework that exploits LLMs'
susceptibility to adversarial inputs to undermine malicious operations. Upon
detecting an automated cyberattack, Mantis plants carefully crafted inputs into
system responses, leading the attacker's LLM to disrupt their own operations
(passive defense) or even compromise the attacker's machine (active defense).
By deploying purposefully vulnerable decoy services to attract the attacker and
using dynamic prompt injections for the attacker's LLM, Mantis can autonomously
hack back the attacker. In our experiments, Mantis consistently achieved over
95% effectiveness against automated LLM-driven attacks. To foster further
research and collaboration, Mantis is available as an open-source tool:
https://github.com/pasquini-dario/project_mantis |
---|---|
DOI: | 10.48550/arxiv.2410.20911 |