Reinforcement Guided Multi-Task Learning Framework for Low-Resource Stereotype Detection
Saved in:

| Main authors: | , , , |
| --- | --- |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
Summary:

As large Pre-trained Language Models (PLMs) trained on large amounts of data in an unsupervised manner become more ubiquitous, identifying various types of bias in text has come into sharp focus. Existing "Stereotype Detection" datasets mainly adopt a diagnostic approach toward large PLMs. Blodgett et al. (2021a) show that there are significant reliability issues with the existing benchmark datasets. Annotating a reliable dataset requires a precise understanding of the subtle nuances of how stereotypes manifest in text. In this paper, we annotate a focused evaluation set for "Stereotype Detection" that addresses those pitfalls by deconstructing various ways in which stereotypes manifest in text. Further, we present a multi-task model that leverages the abundance of data-rich neighboring tasks, such as hate speech detection, offensive language detection, and misogyny detection, to improve the empirical performance on "Stereotype Detection". We then propose a reinforcement-learning agent that guides the multi-task learning model by learning to identify the training examples from the neighboring tasks that help the target task the most. We show that the proposed models achieve significant empirical gains over existing baselines on all the tasks.
DOI: 10.48550/arxiv.2203.14349
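The abstract describes a reinforcement-learning agent that learns which auxiliary-task examples (hate speech, offensive language, misogyny detection) most help the target stereotype-detection task. Below is a minimal, hypothetical sketch of that idea using a REINFORCE-style selection policy on toy data; it is not the authors' implementation, and all names (`select_prob`, `dev_score`, the synthetic features) are illustrative assumptions.

```python
# Minimal sketch: an agent learns to keep auxiliary-task examples whose
# inclusion improves a stand-in for target-task dev performance.
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: feature vectors for auxiliary-task examples, plus a hidden
# flag marking which examples are actually helpful to the target task.
n_aux, dim = 200, 8
aux_feats = rng.normal(size=(n_aux, dim))
true_helpful = aux_feats[:, 0] > 0      # hidden ground truth for the toy

w = np.zeros(dim)                       # selection-policy parameters

def select_prob(x):
    """Probability the agent keeps auxiliary example(s) x."""
    return 1.0 / (1.0 + np.exp(-x @ w))

def dev_score(keep_mask):
    """Stand-in for target-task dev accuracy after training on the
    selected auxiliary examples: fraction of kept examples that help."""
    kept = np.asarray(keep_mask, dtype=bool)
    if kept.sum() == 0:
        return 0.0
    return true_helpful[kept].mean()

lr, baseline = 0.1, 0.0
for step in range(300):
    probs = select_prob(aux_feats)
    keep = rng.random(n_aux) < probs    # sample a selection (action)
    reward = dev_score(keep)            # feedback from the target task
    baseline = 0.9 * baseline + 0.1 * reward
    # REINFORCE for a Bernoulli policy: grad log-prob = (action - prob) * x,
    # scaled by the advantage (reward minus moving baseline).
    grad = ((keep - probs)[:, None] * aux_feats).mean(axis=0)
    w += lr * (reward - baseline) * grad

print(f"helpful fraction among selected: {dev_score(select_prob(aux_feats) > 0.5):.2f}")
```

In the paper's setting, `dev_score` would correspond to evaluating the multi-task stereotype-detection model on a held-out target-task set, a far more expensive signal than this toy proxy; the sketch only illustrates the selection mechanism.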