RapGuard: Safeguarding Multimodal Large Language Models via Rationale-aware Defensive Prompting
Saved in:

| Main authors: | , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
| Abstract: | While Multimodal Large Language Models (MLLMs) have made remarkable progress in vision-language reasoning, they are also more susceptible to producing harmful content than text-only models. Existing defensive prompting techniques rely on a static, unified safety guideline that fails to account for the specific risks inherent in different multimodal contexts. To address these limitations, we propose RapGuard, a novel framework that uses multimodal chain-of-thought reasoning to dynamically generate scenario-specific safety prompts. RapGuard enhances safety by adapting its prompts to the unique risks of each input, effectively mitigating harmful outputs while maintaining high performance on benign tasks. Our experimental results across multiple MLLM benchmarks demonstrate that RapGuard achieves state-of-the-art safety performance, significantly reducing harmful content without degrading response quality. |
|---|---|
| DOI: | 10.48550/arxiv.2412.18826 |
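
The abstract describes RapGuard only at a conceptual level: derive an input-specific risk rationale via multimodal chain-of-thought, then turn that rationale into a scenario-specific safety prompt for the final answer. The sketch below illustrates one way such a two-pass pipeline could be wired up; the prompt templates, the `generate` callable, and the function `rationale_aware_answer` are hypothetical assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch of rationale-aware defensive prompting (assumed interface,
# not RapGuard's released code).
from typing import Callable

# `generate(image, prompt) -> str` stands in for any MLLM inference call;
# substitute your own model wrapper here (hypothetical interface).
MLLMGenerate = Callable[[bytes, str], str]

RATIONALE_TEMPLATE = (
    "Before answering, reason step by step about whether the image and the "
    "request below could lead to a harmful or unsafe response, and name the "
    "specific risk scenario if any.\n"
    "Request: {query}\n"
    "Risk rationale:"
)

DEFENSIVE_TEMPLATE = (
    "Safety context: {rationale}\n"
    "Follow the safety context above. If the request is harmful in this "
    "scenario, refuse and explain briefly; otherwise answer helpfully.\n"
    "Request: {query}"
)


def rationale_aware_answer(generate: MLLMGenerate, image: bytes, query: str) -> str:
    """Two-pass defensive prompting: derive an input-specific risk rationale,
    then condition the final answer on it."""
    # Pass 1: multimodal chain-of-thought rationale about this input's risks.
    rationale = generate(image, RATIONALE_TEMPLATE.format(query=query))
    # Pass 2: prepend the adaptive, scenario-specific safety prompt.
    return generate(image, DEFENSIVE_TEMPLATE.format(rationale=rationale, query=query))


if __name__ == "__main__":
    # Dummy model stub so the sketch runs without a real MLLM.
    def dummy_generate(image: bytes, prompt: str) -> str:
        return f"[model output for prompt of length {len(prompt)}]"

    print(rationale_aware_answer(dummy_generate, b"<image bytes>", "Describe this picture."))
```

Unlike a static safety guideline, the rationale in pass 1 depends on the concrete image-query pair, so the defensive prompt in pass 2 adapts to the risks of each input while leaving benign requests unaffected.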