Are Large Language Models Really Bias-Free? Jailbreak Prompts for Assessing Adversarial Robustness to Bias Elicitation
Large Language Models (LLMs) have revolutionized artificial intelligence, demonstrating remarkable computational power and linguistic capabilities. However, these models are inherently prone to various biases stemming from their training data. These include selection, linguistic, and confirmation bi...
Gespeichert in:
Hauptverfasser: | , , , |
---|---|
Format: | Artikel |
Sprache: | eng |
Schlagworte: | |
Online-Zugang: | Volltext bestellen |
Tags: |
Tag hinzufügen
Keine Tags, Fügen Sie den ersten Tag hinzu!
|
Zusammenfassung: | Large Language Models (LLMs) have revolutionized artificial intelligence,
demonstrating remarkable computational power and linguistic capabilities.
However, these models are inherently prone to various biases stemming from
their training data. These include selection, linguistic, and confirmation
biases, along with common stereotypes related to gender, ethnicity, sexual
orientation, religion, socioeconomic status, disability, and age. This study
explores the presence of these biases within the responses given by the most
recent LLMs, analyzing the impact on their fairness and reliability. We also
investigate how known prompt engineering techniques can be exploited to
effectively reveal hidden biases of LLMs, testing their adversarial robustness
against jailbreak prompts specially crafted for bias elicitation. Extensive
experiments are conducted using the most widespread LLMs at different scales,
confirming that LLMs can still be manipulated to produce biased or
inappropriate responses, despite their advanced capabilities and sophisticated
alignment processes. Our findings underscore the importance of enhancing
mitigation techniques to address these safety issues, toward a more sustainable
and inclusive artificial intelligence. |
---|---|
DOI: | 10.48550/arxiv.2407.08441 |