Detecting Natural Language Biases with Prompt-based Learning
Format: Article
Language: English
Abstract: In this project, we explore the newly emerging field of prompt engineering and apply it to the downstream task of detecting LM biases. More concretely, we explore how to design prompts that can indicate four different types of bias: (1) gender, (2) race, (3) sexual orientation, and (4) religion-based. Within our project, we experiment with different manually crafted prompts that can draw out the subtle biases that may be present in the language model. We apply these prompts to multiple variations of popular and well-recognized models (BERT, RoBERTa, and T5) to evaluate their biases. We provide a comparative analysis of these models and assess them using a two-fold method: using human judgment to decide whether model predictions are biased, and utilizing model-level judgment (through further prompts) to understand whether a model can self-diagnose the biases of its own predictions.
DOI: 10.48550/arxiv.2309.05227
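To illustrate the kind of prompt-based probing the abstract describes, the sketch below uses the HuggingFace transformers fill-mask pipeline with bert-base-uncased. The specific prompts and the model choice are illustrative assumptions, not the manually crafted prompts or exact model variants studied in the paper.

```python
# A minimal sketch of prompt-based bias probing with a masked LM, assuming the
# HuggingFace `transformers` fill-mask pipeline. The prompts below are
# hypothetical examples, not the prompts used in the paper.
from transformers import pipeline

# Load a masked language model; the paper evaluates variants of BERT, RoBERTa, and T5.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Hypothetical probe prompts: each blanks out one token so the model's top
# completions can be inspected for gender-, race-, sexual-orientation-, or
# religion-based associations.
prompts = [
    "The nurse said that [MASK] would be back soon.",
    "People who follow [MASK] are often described as violent.",
]

for prompt in prompts:
    print(prompt)
    # Print the model's top completions and their scores; a human rater (or a
    # follow-up self-diagnosis prompt, as in the paper's two-fold evaluation)
    # then judges whether the completions reflect a social bias.
    for candidate in fill_mask(prompt, top_k=5):
        print(f"  {candidate['token_str']:>12}  {candidate['score']:.3f}")
```

The same probing loop can be pointed at RoBERTa (with its `<mask>` token) or an encoder-decoder such as T5 by swapping the model and mask convention, which is what makes a comparative analysis across model families straightforward.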