Aligning with Whom? Large Language Models Have Gender and Racial Biases in Subjective NLP Tasks
Saved in:
Main Authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Abstract: | Human perception of language depends on personal backgrounds like gender and ethnicity. While existing studies have shown that large language models (LLMs) hold values that are closer to certain societal groups, it is unclear whether their prediction behaviors on subjective NLP tasks also exhibit a similar bias. In this study, leveraging the POPQUORN dataset, which contains annotations from diverse demographic backgrounds, we conduct a series of experiments on four popular LLMs to investigate their capability to understand group differences and potential biases in their predictions for politeness and offensiveness. We find that for both tasks, model predictions are closer to the labels from White and female participants. We further explore prompting with the target demographic labels and show that including the target demographic in the prompt actually worsens the model's performance. More specifically, when prompted to respond from the perspective of "Black" and "Asian" individuals, models show lower performance in predicting both the overall scores and the scores from the corresponding groups. Our results suggest that LLMs hold gender and racial biases in subjective NLP tasks and that demographic-infused prompts alone may be insufficient to mitigate such effects. Code and data are available at https://github.com/Jiaxin-Pei/LLM-Group-Bias. |
---|---|
DOI: | 10.48550/arxiv.2311.09730 |
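The abstract's "demographic-infused prompts" refer to asking the model to rate politeness or offensiveness from the perspective of a named group. A minimal, hypothetical sketch of that idea in Python is below; the function name and prompt wording are illustrative assumptions, not the paper's actual prompts (see the linked repository for those).

```python
from typing import Optional

def build_rating_prompt(text: str, task: str = "offensiveness",
                        demographic: Optional[str] = None) -> str:
    """Build a 1-5 rating prompt, optionally asking the model to answer
    from the perspective of a named demographic group.

    Illustrative only: the exact instructions used in the study may differ.
    """
    persona = f"Respond as if you were a {demographic} person. " if demographic else ""
    return (f"{persona}On a scale from 1 (not at all) to 5 (very), "
            f"rate the {task} of the following text.\n"
            f"Text: {text}\n"
            f"Answer with a single number.")

if __name__ == "__main__":
    example = "Could you possibly close the door?"
    # Baseline prompt vs. a demographic-infused prompt for the same item.
    print(build_rating_prompt(example, task="politeness"))
    print(build_rating_prompt(example, task="politeness", demographic="Asian"))
```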