Muslim-Violence Bias Persists in Debiased GPT Models
Format: Article
Language: English
Abstract: Abid et al. (2021) showed a tendency in GPT-3 to generate mostly violent completions when prompted about Muslims, compared with other religions. Two pre-registered replication attempts found few violent completions and only a weak anti-Muslim bias in the more recent InstructGPT, which was fine-tuned to eliminate biased and toxic outputs. However, further pre-registered experiments showed that using common names associated with the religions in prompts increases the rate of violent completions several-fold, revealing a significant second-order anti-Muslim bias. ChatGPT showed a bias many times stronger regardless of prompt format, suggesting that the effects of debiasing were reduced with continued model development. Our content analysis revealed religion-specific themes containing offensive stereotypes across all experiments. Our results show the need for continual debiasing of models in ways that address both explicit and higher-order associations.
DOI: 10.48550/arxiv.2310.18368