Reinforcement Guided Multi-Task Learning Framework for Low-Resource Stereotype Detection

As large Pre-trained Language Models (PLMs) trained on large amounts of data in an unsupervised manner become more ubiquitous, identifying the various types of bias present in text has come into sharp focus. Existing "Stereotype Detection" datasets mainly adopt a diagnostic approach toward large PLMs. Blodgett et al. (2021a) show that there are significant reliability issues with the existing benchmark datasets. Annotating a reliable dataset requires a precise understanding of the subtle nuances of how stereotypes manifest in text. In this paper, we annotate a focused evaluation set for "Stereotype Detection" that addresses those pitfalls by deconstructing the various ways in which stereotypes manifest in text. Further, we present a multi-task model that leverages the abundance of data-rich neighboring tasks, such as hate speech detection, offensive language detection, and misogyny detection, to improve the empirical performance on "Stereotype Detection". We then propose a reinforcement-learning agent that guides the multi-task learning model by learning to identify the training examples from the neighboring tasks that help the target task the most. We show that the proposed models achieve significant empirical gains over existing baselines on all the tasks.
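The abstract describes a reinforcement-learning agent that learns which neighboring-task examples (hate speech, offensive language, misogyny detection) most improve the low-resource target task. Below is a minimal, self-contained sketch of one plausible instantiation of that idea: a REINFORCE-style selector over a toy linear classifier with synthetic data. The selector architecture, the reward (change in target-task accuracy), and all names here are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: RL-guided selection of neighboring-task training examples
# for a low-resource target task. Toy linear models and synthetic data
# stand in for the paper's PLM-based multi-task model.
import torch
import torch.nn as nn

torch.manual_seed(0)
DIM, N_NEIGHBOR, N_TARGET = 16, 256, 64

# Synthetic stand-ins for encoded examples (in practice: PLM embeddings).
neighbor_x = torch.randn(N_NEIGHBOR, DIM)
neighbor_y = (neighbor_x.sum(dim=1) > 0).long()   # e.g. hate-speech labels
target_x = torch.randn(N_TARGET, DIM)
target_y = (target_x[:, 0] > 0).long()            # stereotype labels

classifier = nn.Linear(DIM, 2)                    # shared task head (toy)
selector = nn.Linear(DIM, 1)                      # RL policy: keep/drop logit
clf_opt = torch.optim.Adam(classifier.parameters(), lr=1e-2)
sel_opt = torch.optim.Adam(selector.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def target_accuracy():
    """Held-out proxy for target-task performance."""
    with torch.no_grad():
        return (classifier(target_x).argmax(1) == target_y).float().mean().item()

baseline = target_accuracy()
for step in range(50):
    # Policy samples a subset of neighboring-task examples.
    keep_prob = torch.sigmoid(selector(neighbor_x)).squeeze(1)
    mask = torch.bernoulli(keep_prob.detach())
    if mask.sum() == 0:
        continue
    # Train the classifier on the selected neighboring-task examples.
    clf_opt.zero_grad()
    loss_fn(classifier(neighbor_x[mask.bool()]),
            neighbor_y[mask.bool()]).backward()
    clf_opt.step()
    # Reward: improvement on the low-resource target task.
    acc = target_accuracy()
    reward = acc - baseline
    baseline = 0.9 * baseline + 0.1 * acc          # moving-average baseline
    # REINFORCE update of the selector.
    log_prob = (mask * keep_prob.clamp_min(1e-8).log()
                + (1 - mask) * (1 - keep_prob).clamp_min(1e-8).log()).sum()
    sel_opt.zero_grad()
    (-reward * log_prob).backward()
    sel_opt.step()
```

In this sketch the selector is rewarded only through the target task, so it gradually downweights neighboring-task examples whose gradients hurt stereotype detection; the paper's actual agent, model sizes, and reward shaping may differ.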

Bibliographic details
Authors: Pujari, Rajkumar; Oveson, Erik; Kulkarni, Priyanka; Nouri, Elnaz
Format: Article
Language: English
Date: 2022-03-27
DOI: 10.48550/arxiv.2203.14349
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Learning
Source: arXiv.org
Online access: full text at https://arxiv.org/abs/2203.14349