MLLM-as-a-Judge for Image Safety without Human Labeling
Saved in:

Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Image content safety has become a significant challenge with the rise of
visual media on online platforms. Meanwhile, in the age of AI-generated content
(AIGC), many image generation models are capable of producing harmful content,
such as images containing sexual or violent material. Thus, it becomes crucial
to identify such unsafe images based on established safety rules. Pre-trained
Multimodal Large Language Models (MLLMs) offer potential in this regard, given
their strong pattern recognition abilities. Existing approaches typically
fine-tune MLLMs with human-labeled datasets, which, however, brings a series of
drawbacks. First, relying on human annotators to label data following intricate
and detailed guidelines is both expensive and labor-intensive. Furthermore,
users of safety judgment systems may need to update safety rules frequently,
making fine-tuning based on human annotation even more challenging. This raises
the research question: can we detect unsafe images by querying MLLMs in a
zero-shot setting using a predefined safety constitution (a set of safety
rules)? Our research showed that simply querying pre-trained MLLMs does not
yield satisfactory results. This lack of effectiveness stems from factors such
as the subjectivity of safety rules, the complexity of lengthy constitutions,
and the inherent biases in the models. To address these challenges, we propose
an MLLM-based method that includes objectifying safety rules, assessing the
relevance between rules and images, making quick judgments based on debiased
token probabilities with logically complete yet simplified precondition chains
for safety rules, and conducting more in-depth reasoning with cascaded
chain-of-thought processes if necessary. Experimental results demonstrate that
our method is highly effective for zero-shot image safety judgment tasks.
DOI: 10.48550/arxiv.2501.00192
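
The abstract outlines a judgment cascade: filter out rules irrelevant to the image, make a quick judgment from debiased yes/no token probabilities, and escalate only borderline cases to chain-of-thought reasoning. Below is a minimal Python sketch of that control flow under stated assumptions: the helpers `mllm_token_logprobs` and `mllm_generate` are hypothetical placeholders for an actual MLLM API, and the bias-correction heuristic and the thresholds are illustrative, not the paper's exact procedure.

```python
import math

def mllm_token_logprobs(image, prompt, tokens):
    """Hypothetical placeholder: return {token: log-probability} for each
    candidate next token when the MLLM sees `image` and `prompt`."""
    raise NotImplementedError

def mllm_generate(image, prompt):
    """Hypothetical placeholder: return the MLLM's free-form text answer."""
    raise NotImplementedError

def debiased_yes_probability(image, rule):
    """Quick judgment: normalize P('yes') vs. P('no') for a rule-violation
    question, then subtract the model's image-free prior on the same question
    to offset its inherent yes/no bias (one simple reading of 'debiased
    token probabilities')."""
    question = f"Does this image violate the rule: '{rule}'? Answer yes or no."

    def p_yes(logprobs):
        py, pn = math.exp(logprobs["yes"]), math.exp(logprobs["no"])
        return py / (py + pn)

    conditioned = p_yes(mllm_token_logprobs(image, question, ["yes", "no"]))
    prior = p_yes(mllm_token_logprobs(None, question, ["yes", "no"]))
    return conditioned - (prior - 0.5)  # shift the model's prior back to neutral

def judge_image(image, constitution, low=0.35, high=0.65):
    """Cascade over the safety constitution: skip irrelevant rules, accept
    confident quick judgments, and fall back to chain-of-thought reasoning
    only for borderline cases."""
    for rule in constitution:
        relevance = mllm_generate(
            image, f"Is the rule '{rule}' relevant to this image? Answer yes or no."
        )
        if relevance.strip().lower().startswith("no"):
            continue  # rule does not apply to this image
        p = debiased_yes_probability(image, rule)
        if p >= high:
            return "unsafe", rule  # confident violation
        if p <= low:
            continue               # confidently safe under this rule
        # Borderline: in-depth, precondition-by-precondition reasoning.
        cot = mllm_generate(
            image,
            f"Think step by step: list the preconditions of the rule '{rule}', "
            "check each one against the image, then conclude with 'violates' "
            "or 'complies'.",
        )
        if "violates" in cot.lower():
            return "unsafe", rule
    return "safe", None
```

In this reading, the token-probability gate resolves most rule-image pairs with a single forward pass, reserving the slower chain-of-thought generation for the cases where the quick score is inconclusive.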