On Releasing Annotator-Level Labels and Information in Datasets
Saved in:
Main authors: , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: A common practice in building NLP datasets, especially those using crowd-sourced annotations, involves obtaining multiple annotator judgements on the same data instances, which are then flattened to produce a single "ground truth" label or score through majority voting, averaging, or adjudication. While these approaches may be appropriate in certain annotation tasks, such aggregations overlook the socially constructed nature of human perceptions that annotations for relatively more subjective tasks are meant to capture. In particular, systematic disagreements between annotators owing to their socio-cultural backgrounds and/or lived experiences are often obfuscated through such aggregations. In this paper, we empirically demonstrate that label aggregation may introduce representational biases of individual and group perspectives. Based on this finding, we propose a set of recommendations for increased utility and transparency of datasets for downstream use cases.
DOI: 10.48550/arxiv.2110.05699
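
The abstract's central point, that flattening multiple judgements via majority voting can erase systematic, group-level disagreement, can be illustrated with a minimal sketch. The toy data, the group labels "A" and "B", and the helper functions below are hypothetical and not taken from the paper; they merely contrast an aggregated label with per-group label rates.

```python
# Illustrative sketch (not from the paper): how majority-vote aggregation can
# hide systematic disagreement between annotator groups. All data, group
# labels, and function names here are hypothetical.
from collections import Counter

# Toy annotations: for each item, a list of (annotator_group, label) pairs.
# Suppose group "B" consistently labels items 1 and 2 as positive (1),
# while the larger group "A" does not.
annotations = {
    "item_1": [("A", 0), ("A", 0), ("A", 0), ("B", 1), ("B", 1)],
    "item_2": [("A", 0), ("A", 0), ("A", 0), ("B", 1), ("B", 1)],
    "item_3": [("A", 1), ("A", 1), ("A", 1), ("B", 1), ("B", 1)],
}

def majority_vote(labels):
    """Flatten a list of labels into a single 'ground truth' label."""
    return Counter(labels).most_common(1)[0][0]

def per_group_rate(pairs, group):
    """Fraction of positive labels given by one annotator group."""
    votes = [label for g, label in pairs if g == group]
    return sum(votes) / len(votes) if votes else None

for item, pairs in annotations.items():
    aggregated = majority_vote([label for _, label in pairs])
    rate_a = per_group_rate(pairs, "A")
    rate_b = per_group_rate(pairs, "B")
    print(f"{item}: aggregated={aggregated}  "
          f"group A rate={rate_a:.2f}  group B rate={rate_b:.2f}")

# Items 1 and 2 collapse to label 0 even though every group-B annotator
# labeled them 1; releasing only the aggregated column would discard that
# systematic, group-level signal, which is the concern the paper raises.
```

Releasing the annotator-level `(annotator_group, label)` records alongside, or instead of, the aggregated column is what would let downstream users recover the per-group disagreement shown above.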