Learning diverse attacks on large language models for robust red-teaming and safety tuning
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Summary: Red-teaming, or identifying prompts that elicit harmful responses, is a critical step in ensuring the safe and responsible deployment of large language models (LLMs). Developing effective protection against many modes of attack prompts requires discovering diverse attacks. Automated red-teaming typically uses reinforcement learning to fine-tune an attacker language model to generate prompts that elicit undesirable responses from a target LLM, as measured, for example, by an auxiliary toxicity classifier. We show that even with explicit regularization to favor novelty and diversity, existing approaches suffer from mode collapse or fail to generate effective attacks. As a flexible and probabilistically principled alternative, we propose to use GFlowNet fine-tuning, followed by a secondary smoothing phase, to train the attacker model to generate diverse and effective attack prompts. We find that the attacks generated by our method are effective against a wide range of target LLMs, both with and without safety tuning, and transfer well between target LLMs. Finally, we demonstrate that models safety-tuned using a dataset of red-teaming prompts generated by our method are robust to attacks from other RL-based red-teaming approaches.
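The approach summarized above lends itself to a compact illustration. The following is a minimal sketch, not the authors' implementation: a toy autoregressive attacker is fine-tuned with the trajectory-balance objective commonly used for GFlowNet training, (log Z + log P_F(x) − log R(x))², so that prompts x are sampled with probability proportional to their reward R(x). In the paper the reward would be derived from a toxicity classifier scoring the target LLM's response; here `TinyAttackerLM` and `log_reward` are hypothetical stand-ins, and the secondary smoothing phase is omitted.

```python
# Minimal sketch of GFlowNet-style attacker fine-tuning via trajectory
# balance. TinyAttackerLM and log_reward are hypothetical stand-ins; in
# the paper, the reward comes from a toxicity classifier applied to the
# target LLM's response to the sampled prompt.
import torch
import torch.nn as nn

class TinyAttackerLM(nn.Module):
    """Toy autoregressive prompt generator over a small vocabulary."""
    def __init__(self, vocab_size=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):                # tokens: (batch, length)
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)                   # next-token logits

def log_reward(prompts):
    """Dummy log R(x); a real implementation would query the target LLM
    and score its response with a toxicity classifier."""
    return -0.1 * prompts.float().sum(dim=-1)

attacker = TinyAttackerLM()
log_Z = nn.Parameter(torch.zeros(1))          # learned log-partition estimate
opt = torch.optim.Adam(list(attacker.parameters()) + [log_Z], lr=1e-3)

BOS, LENGTH, BATCH = 0, 16, 8
for step in range(200):
    tokens = torch.full((BATCH, 1), BOS, dtype=torch.long)
    log_pf = torch.zeros(BATCH)               # accumulated log P_F(x)
    for _ in range(LENGTH):                   # sample prompts token by token
        logits = attacker(tokens)[:, -1]
        dist = torch.distributions.Categorical(logits=logits)
        nxt = dist.sample()
        log_pf = log_pf + dist.log_prob(nxt)
        tokens = torch.cat([tokens, nxt[:, None]], dim=1)
    # Trajectory balance: drive log Z + log P_F(x) toward log R(x), so
    # the sampler matches the reward distribution up to normalization.
    loss = ((log_Z + log_pf - log_reward(tokens[:, 1:])) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Matching the reward distribution, rather than maximizing expected reward as in standard RL fine-tuning, is what gives the sampler an incentive to cover many distinct high-reward prompts instead of collapsing onto a single mode; the secondary smoothing phase mentioned in the summary would then re-fit the attacker on high-reward prompts collected during this sampling.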
DOI: 10.48550/arxiv.2405.18540