Tree of Attacks: Jailbreaking Black-Box LLMs Automatically
Saved in:
Main Authors: | , , , , , , |
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Abstract: | While Large Language Models (LLMs) display versatile functionality, they
continue to generate harmful, biased, and toxic content, as demonstrated by the
prevalence of human-designed jailbreaks. In this work, we present Tree of
Attacks with Pruning (TAP), an automated method for generating jailbreaks that
requires only black-box access to the target LLM. TAP utilizes an attacker LLM
to iteratively refine candidate (attack) prompts until one of the refined
prompts jailbreaks the target. In addition, before sending prompts to the
target, TAP assesses them and prunes those unlikely to result in jailbreaks,
reducing the number of queries sent to the target LLM. In empirical
evaluations, we observe that TAP generates prompts that jailbreak
state-of-the-art LLMs (including GPT4-Turbo and GPT4o) for more than 80% of the
prompts. This significantly improves upon previous state-of-the-art
black-box methods for generating jailbreaks while requiring fewer queries.
Furthermore, TAP is also capable of jailbreaking LLMs protected by
state-of-the-art guardrails, e.g., LlamaGuard. |
DOI: | 10.48550/arxiv.2312.02119 |
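
The abstract above describes TAP's core loop: an attacker LLM branches out refinements of candidate prompts, the refinements unlikely to succeed are pruned before any query is sent to the target, and the target's responses are scored to decide which branches to keep. The following is a minimal Python sketch of what such a loop might look like; the callables `attacker`, `on_topic`, `target`, and `judge`, as well as the branching factor, width, depth, and scoring threshold, are illustrative assumptions rather than values taken from the paper.

```python
# Minimal sketch of a TAP-style attack loop (assumptions only). The callables
# `attacker`, `on_topic`, `target`, and `judge` are hypothetical stand-ins for
# the attacker LLM, the pre-query pruning evaluator, the black-box target LLM,
# and the jailbreak judge; the default parameters are illustrative.

def tap_attack(goal, attacker, on_topic, target, judge,
               branching_factor=4, max_width=10, max_depth=10,
               success_threshold=10):
    """Iteratively refine attack prompts in a tree, pruning unpromising branches."""
    leaves = [{"prompt": goal, "history": [], "score": 0}]
    for _ in range(max_depth):
        # Branch: the attacker LLM proposes several refinements of each leaf.
        candidates = []
        for leaf in leaves:
            for _ in range(branching_factor):
                refined = attacker(goal, leaf["prompt"], leaf["history"])
                candidates.append({"prompt": refined,
                                   "history": list(leaf["history"])})

        # First pruning phase: discard candidates judged off-topic before
        # spending any queries on the target LLM.
        candidates = [c for c in candidates if on_topic(goal, c["prompt"])]

        # Query the target and score each response.
        for c in candidates:
            response = target(c["prompt"])
            score = judge(goal, c["prompt"], response)
            c["history"].append((c["prompt"], response, score))
            c["score"] = score
            if score >= success_threshold:  # judge deems the target jailbroken
                return c["prompt"], response

        if not candidates:
            break  # every branch was pruned away; give up on this goal

        # Second pruning phase: keep only the highest-scoring leaves.
        leaves = sorted(candidates, key=lambda c: c["score"],
                        reverse=True)[:max_width]

    return None  # no jailbreak found within the depth/width budget
```

In the paper, the pruning evaluator and the judge are themselves LLM-based; the sketch leaves them abstract so that it commits only to the control flow the abstract describes (branch, prune off-topic candidates, query the target, keep the best leaves).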