Toward Annotator Group Bias in Crowdsourcing
Saved in:
Main Authors: | , , , , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Summary: | Crowdsourcing has emerged as a popular approach for collecting annotated data to train supervised machine learning models. However, annotator bias can lead to defective annotations. Though a few works investigate individual annotator bias, group effects among annotators are largely overlooked. In this work, we reveal that annotators within the same demographic group tend to show consistent group bias in annotation tasks, and we thus conduct an initial study on annotator group bias. We first empirically verify the existence of annotator group bias in various real-world crowdsourcing datasets. Then, we develop a novel probabilistic graphical framework, GroupAnno, to capture annotator group bias with a new extended Expectation Maximization (EM) training algorithm. We conduct experiments on both synthetic and real-world datasets. Experimental results demonstrate the effectiveness of our model in modeling annotator group bias in label aggregation and model learning over competitive baselines. |
---|---|
DOI: | 10.48550/arxiv.2110.08038 |
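The record only summarizes GroupAnno at a high level. As an illustration of the general idea — EM-based label aggregation in which annotators from the same demographic group share bias parameters — here is a minimal sketch assuming a simplified, Dawid-Skene-style binary model; the function `em_group_aggregate`, its arguments, and the toy data are hypothetical and do not reproduce the paper's actual GroupAnno model or its extended EM algorithm.

```python
import numpy as np

def em_group_aggregate(annotations, n_items, n_groups, n_iter=100, eps=1e-6):
    """EM label aggregation with one shared confusion matrix per annotator group.

    annotations: iterable of (item_id, group_id, observed_label), labels in {0, 1}.
    Returns (posterior P(true label = 1) per item, theta[g, z, y]).
    """
    ann = np.asarray(list(annotations), dtype=int)
    items, grps, obs = ann[:, 0], ann[:, 1], ann[:, 2]

    # Initialize item posteriors with a soft majority vote.
    q = np.full(n_items, 0.5)
    counts = np.bincount(items, minlength=n_items)
    votes = np.bincount(items, weights=obs, minlength=n_items)
    q[counts > 0] = votes[counts > 0] / counts[counts > 0]

    # theta[g, z, y] = P(an annotator in group g reports y | true label z).
    theta = np.full((n_groups, 2, 2), 0.5)

    for _ in range(n_iter):
        # M-step: re-estimate the prior and each group's confusion matrix
        # from the current soft labels (with additive smoothing).
        pi = np.clip(q.mean(), eps, 1 - eps)
        for z, w in ((1, q[items]), (0, 1.0 - q[items])):
            den = np.bincount(grps, weights=w, minlength=n_groups)
            for y in (0, 1):
                num = np.bincount(grps, weights=w * (obs == y), minlength=n_groups)
                theta[:, z, y] = (num + eps) / (den + 2 * eps)

        # E-step: recompute P(true label = 1 | annotations) under the group-bias model.
        log1 = np.full(n_items, np.log(pi))
        log0 = np.full(n_items, np.log(1.0 - pi))
        np.add.at(log1, items, np.log(theta[grps, 1, obs]))
        np.add.at(log0, items, np.log(theta[grps, 0, obs]))
        q_new = 1.0 / (1.0 + np.exp(log0 - log1))

        if np.max(np.abs(q_new - q)) < eps:
            q = q_new
            break
        q = q_new

    return q, theta


# Toy usage: three items, two demographic groups; group 1 tends to report 0.
toy = [(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 0), (2, 0, 1), (2, 1, 0)]
posterior, group_confusion = em_group_aggregate(toy, n_items=3, n_groups=2)
print(posterior)          # estimated P(true label = 1) per item
print(group_confusion)    # estimated per-group reporting probabilities
```

In this sketch, a consistent group bias would surface in `theta[g, z, y]`, for example one group systematically under-reporting the positive class regardless of the item.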