Immune: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment
Format: Article
Language: English
Abstract: With the widespread deployment of Multimodal Large Language Models (MLLMs)
for visual-reasoning tasks, improving their safety has become crucial. Recent
research indicates that despite training-time safety alignment, these models
remain vulnerable to jailbreak attacks. In this work, we first highlight an
important safety gap, showing that alignment achieved solely through safety
training may be insufficient against jailbreak attacks. To address this
vulnerability, we propose Immune, an inference-time defense framework that
leverages a safe reward model through controlled decoding to defend against
jailbreak attacks. Additionally, we provide a mathematical characterization of
Immune, offering provable guarantees against jailbreaks. Extensive evaluations
on diverse jailbreak benchmarks using recent MLLMs reveal that Immune
effectively enhances model safety while preserving the model's original
capabilities. For instance, against text-based jailbreak attacks on LLaVA-1.6,
Immune reduces the attack success rate by 57.82% and 16.78% compared to the
base MLLM and the state-of-the-art defense strategy, respectively.
DOI: 10.48550/arxiv.2411.18688
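
The abstract describes Immune as steering generation at inference time by combining a safe reward model with controlled decoding. The sketch below is only a hedged, illustrative approximation of that general idea, not the paper's algorithm: `VOCAB`, `base_logprobs`, `safety_reward`, and the weight `alpha` are hypothetical stand-ins for an MLLM decoder and a trained safety reward model. In practice the scores would come from the MLLM's logits and a learned reward model rather than these toy functions.

```python
# Illustrative sketch of reward-guided controlled decoding (assumptions noted
# above): at each step, candidate next tokens from a base model are re-scored
# with a safety reward, and the highest combined score is selected.
import math
import random

# Toy vocabulary; a real MLLM would use its tokenizer's vocabulary.
VOCAB = ["I", "cannot", "help", "with", "that", "request", ".", "<eos>"]

def base_logprobs(prefix):
    # Hypothetical stand-in for the base MLLM's next-token log-probabilities.
    rng = random.Random(" ".join(prefix))
    logits = [rng.gauss(0.0, 1.0) for _ in VOCAB]
    z = math.log(sum(math.exp(l) for l in logits))
    return [l - z for l in logits]

def safety_reward(prefix, token):
    # Hypothetical stand-in for the safe reward model: favors refusal-style tokens.
    return 1.0 if token in {"cannot", "."} else 0.0

def controlled_decode(prompt_tokens, alpha=2.0, max_new_tokens=8):
    # Greedy controlled decoding: pick the token maximizing
    # base log-probability + alpha * safety reward.
    out = list(prompt_tokens)
    for _ in range(max_new_tokens):
        logps = base_logprobs(out)
        scores = [lp + alpha * safety_reward(out, tok)
                  for lp, tok in zip(logps, VOCAB)]
        next_tok = VOCAB[max(range(len(VOCAB)), key=scores.__getitem__)]
        if next_tok == "<eos>":
            break
        out.append(next_tok)
    return out

if __name__ == "__main__":
    print(" ".join(controlled_decode(["Describe", "how", "to"])))
```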