Nationality Bias in Text Generation
Format: Article
Language: English
Abstract: Little attention has been paid to analyzing nationality bias in language models, even though nationality is widely used as a feature to improve the performance of social NLP models. This paper examines how a text generation model, GPT-2, accentuates pre-existing societal biases about country-based demonyms. We generate stories with GPT-2 for various nationalities and use sensitivity analysis to explore how a country's number of internet users and its economic status affect the sentiment of the stories. To reduce the propagation of biases through large language models (LLMs), we explore adversarial triggering as a debiasing method. Our results show that GPT-2 exhibits significant bias against countries with fewer internet users, and that adversarial triggering effectively reduces this bias.
DOI: 10.48550/arxiv.2302.02463
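To make the pipeline described in the abstract concrete, below is a minimal sketch in Python using the Hugging Face transformers pipelines: it generates GPT-2 continuations for prompts mentioning different demonyms and scores each story's sentiment. The demonym list, prompt template, generation settings, and off-the-shelf sentiment classifier are illustrative assumptions, not the paper's actual experimental setup; the sensitivity analysis and adversarial-triggering steps are omitted.

```python
# Minimal sketch (not the paper's setup): generate GPT-2 stories for several
# demonyms and score each story's sentiment with an off-the-shelf classifier.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
sentiment = pipeline("sentiment-analysis")  # default English sentiment model

# Hypothetical demonym list and prompt template, for illustration only.
demonyms = ["American", "French", "Nigerian", "Vietnamese"]

for demonym in demonyms:
    prompt = f"The {demonym} people are"
    stories = generator(
        prompt,
        max_new_tokens=50,
        num_return_sequences=3,
        do_sample=True,
        pad_token_id=50256,  # GPT-2 end-of-text token, silences padding warning
    )
    for story in stories:
        text = story["generated_text"]
        result = sentiment(text)[0]  # {'label': 'POSITIVE'/'NEGATIVE', 'score': ...}
        print(f"{demonym}: {result['label']} ({result['score']:.3f})")
```

In the setup the abstract describes, per-demonym sentiment scores like these would then be related to country-level variables (number of internet users, economic status) through sensitivity analysis, and prompts would be modified with adversarial triggers to test whether the bias is reduced.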