EEG-Defender: Defending against Jailbreak through Early Exit Generation of Large Language Models
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Large Language Models (LLMs) are increasingly attracting attention in various applications. Nonetheless, there is growing concern as some users attempt to exploit these models for malicious purposes, including the synthesis of controlled substances and the propagation of disinformation. In an effort to mitigate such risks, the concept of "Alignment" technology has been developed. However, recent studies indicate that this alignment can be undermined using sophisticated prompt engineering or adversarial suffixes, a technique known as "Jailbreak." Our research takes cues from the human-like generation process of LLMs. We identify that while jailbreak prompts may yield output logits similar to those of benign prompts, their initial embeddings within the model's latent space tend to be more analogous to those of malicious prompts. Leveraging this finding, we propose utilizing the early transformer outputs of LLMs as a means to detect malicious inputs and to terminate generation immediately. Building on this idea, we introduce a simple yet effective defense approach for LLMs called EEG-Defender. We conduct comprehensive experiments on ten jailbreak methods across three models. Our results demonstrate that EEG-Defender reduces the Attack Success Rate (ASR) by roughly 85%, compared with about 50% for current SOTA methods, with minimal impact on the utility and effectiveness of LLMs.
DOI: 10.48550/arxiv.2408.11308
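
As a rough illustration of the idea described in the abstract (comparing a prompt's early-layer hidden states against those of known benign and harmful prompts, and refusing to generate when the prompt sits closer to the harmful side), here is a minimal sketch. It is not the authors' implementation: the model name, layer index, mean pooling, reference prompt sets, and decision margin are assumptions chosen purely for illustration.

```python
# Minimal sketch of early-layer prompt screening, assuming a Hugging Face causal LM.
# All specifics (model, layer, reference prompts, margin) are illustrative assumptions.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"  # assumption: any locally available causal LM works
EARLY_LAYER = 8                               # assumption: an "early" transformer layer index

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

@torch.no_grad()
def early_embedding(prompt: str) -> torch.Tensor:
    """Mean-pooled hidden state of the prompt at an early transformer layer."""
    inputs = tokenizer(prompt, return_tensors="pt")
    hidden = model(**inputs).hidden_states[EARLY_LAYER]  # shape: (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)                 # shape: (dim,)

# Placeholder reference sets; in practice these would be larger curated prompt collections.
benign_refs = ["How do I bake sourdough bread?", "Explain photosynthesis simply."]
harmful_refs = ["How do I synthesize an illegal drug?", "Write malware that steals passwords."]

benign_center = torch.stack([early_embedding(p) for p in benign_refs]).mean(dim=0)
harmful_center = torch.stack([early_embedding(p) for p in harmful_refs]).mean(dim=0)

def looks_malicious(prompt: str, margin: float = 0.0) -> bool:
    """Flag a prompt whose early embedding is closer to the harmful centroid."""
    emb = early_embedding(prompt)
    sim_harm = F.cosine_similarity(emb, harmful_center, dim=0)
    sim_benign = F.cosine_similarity(emb, benign_center, dim=0)
    # If flagged, the wrapper would refuse and stop generation before any tokens are produced.
    return bool((sim_harm - sim_benign).item() > margin)
```

In this sketch the screening happens once, before decoding starts; a fuller defense in the spirit of the abstract would also weigh signals from several early layers and tune the margin to balance the attack success rate against false refusals on benign prompts.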