Learning diverse attacks on large language models for robust red-teaming and safety tuning

Red-teaming, or identifying prompts that elicit harmful responses, is a critical step in ensuring the safe and responsible deployment of large language models (LLMs). Developing effective protection against the many modes of attack requires discovering diverse attacks. Automated red-teaming typi...


Bibliographic Details
Main Authors: Lee, Seanie, Kim, Minsu, Cherif, Lynn, Dobre, David, Lee, Juho, Hwang, Sung Ju, Kawaguchi, Kenji, Gidel, Gauthier, Bengio, Yoshua, Malkin, Nikolay, Jain, Moksh
Format: Article
Language: English
Subjects:
Online Access: Order full text