AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: With the advent and widespread deployment of Multimodal Large Language Models (MLLMs), the imperative to ensure their safety has become increasingly pronounced. However, with the integration of additional modalities, MLLMs are exposed to new vulnerabilities, rendering them prone to structure-based jailbreak attacks, in which semantic content (e.g., "harmful text") is injected into images to mislead MLLMs. In this work, we aim to defend against such threats. Specifically, we propose Adaptive Shield Prompting (AdaShield), which prepends inputs with defense prompts to defend MLLMs against structure-based jailbreak attacks without fine-tuning MLLMs or training additional modules (e.g., a post-stage content detector). Initially, we present a manually designed static defense prompt, which thoroughly examines the image and instruction content step by step and specifies response methods for malicious queries. Furthermore, we introduce an adaptive auto-refinement framework consisting of a target MLLM and an LLM-based defense prompt generator (Defender). These components collaboratively and iteratively communicate to generate a defense prompt. Extensive experiments on popular structure-based jailbreak attacks and benign datasets show that our methods consistently improve MLLMs' robustness against structure-based jailbreak attacks without compromising the models' general capabilities on standard benign tasks. Our code is available at https://github.com/rain305f/AdaShield.
DOI: 10.48550/arxiv.2403.09513
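The abstract outlines an adaptive auto-refinement loop in which a Defender LLM iteratively rewrites a defense prompt that is prepended to the target MLLM's input. The following is a minimal Python sketch of that loop under stated assumptions: the helpers `query_target_mllm`, `query_defender_llm`, and `is_jailbroken`, and the static prompt text, are hypothetical stand-ins paraphrased from the abstract, not the paper's actual API or prompt.

```python
# Hypothetical sketch of an AdaShield-style refinement loop; the helper
# functions below are assumed placeholders, not the authors' implementation.

def query_target_mllm(image, text_prompt):
    """Placeholder: return the target MLLM's response to (image, text)."""
    raise NotImplementedError

def query_defender_llm(previous_prompt, failed_response):
    """Placeholder Defender: rewrite the defense prompt given the previous
    prompt and a response that complied with a malicious query."""
    raise NotImplementedError

def is_jailbroken(response):
    """Placeholder judge: True if the response complies with a malicious query."""
    raise NotImplementedError

# Manually designed static defense prompt (paraphrased from the abstract,
# not the paper's exact wording).
STATIC_DEFENSE_PROMPT = (
    "Before answering, examine the image and the instruction step by step. "
    "If either contains harmful content, refuse the request."
)

def adashield_refine(attack_samples, max_iters=5):
    """Iteratively refine the defense prompt until the target MLLM refuses
    every sampled structure-based jailbreak attack, or iterations run out."""
    defense_prompt = STATIC_DEFENSE_PROMPT
    for _ in range(max_iters):
        failures = []
        for image, instruction in attack_samples:
            # The defense prompt is prepended to the text input; the target
            # MLLM itself is never fine-tuned.
            response = query_target_mllm(image, defense_prompt + "\n" + instruction)
            if is_jailbroken(response):
                failures.append(response)
        if not failures:
            return defense_prompt  # all sampled attacks were refused
        # The Defender rewrites the prompt based on a failure case.
        defense_prompt = query_defender_llm(defense_prompt, failures[0])
    return defense_prompt
```

At inference time, the returned `defense_prompt` would simply be prepended to each user query, which is what lets the method work without training additional modules.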