Fortifying Toxic Speech Detectors Against Veiled Toxicity
Modern toxic speech detectors are incompetent in recognizing disguised offensive language, such as adversarial attacks that deliberately avoid known toxic lexicons, or manifestations of implicit bias. Building a large annotated dataset for such veiled toxicity can be very expensive. In this work, we propose a framework aimed at fortifying existing toxic speech detectors without a large labeled corpus of veiled toxicity. Just a handful of probing examples are used to surface orders of magnitude more disguised offenses. We augment the toxic speech detector's training data with these discovered offensive examples, thereby making it more robust to veiled toxicity while preserving its utility in detecting overt toxicity.
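The framework, as described in the abstract, is a data-augmentation loop: a handful of probing examples are used to mine a large unlabeled pool for similar veiled offenses, and the detector is retrained on the enlarged training set. Below is a minimal sketch of that loop; the `fortify` helper and the TF-IDF cosine-similarity mining step are illustrative assumptions (this record does not specify how the paper actually surfaces examples), not the authors' implementation.

```python
# Hedged sketch of the fortification loop from the abstract. The
# similarity-based mining is an assumed stand-in for the paper's own
# (unspecified in this record) mechanism for surfacing veiled offenses.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def fortify(train_texts, train_labels, probes, pool, k=100):
    """Mine pool sentences resembling the probing examples, add them to
    the training data labeled as toxic, and retrain the detector."""
    vec = TfidfVectorizer().fit(train_texts + probes + pool)
    # TF-IDF rows are L2-normalized, so dot products are cosine
    # similarities; score each pool sentence by its closest probe.
    sims = (vec.transform(pool) @ vec.transform(probes).T).max(axis=1)
    sims = np.asarray(sims.todense()).ravel()
    mined = [pool[i] for i in np.argsort(-sims)[:k]]
    # Augment: the surfaced examples are added as toxic (label 1).
    X = vec.transform(train_texts + mined)
    y = np.concatenate([np.asarray(train_labels), np.ones(len(mined))])
    return LogisticRegression(max_iter=1000).fit(X, y), mined
```

On this reading, "orders of magnitude more disguised offenses" corresponds to choosing `k` far larger than the number of probes, while keeping the original overtly toxic training data intact so performance on overt toxicity is preserved.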
Saved in:
Main authors: | Han, Xiaochuang; Tsvetkov, Yulia |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computation and Language |
Online access: | Order full text |
creator | Han, Xiaochuang; Tsvetkov, Yulia |
---|---|
description | Modern toxic speech detectors are incompetent in recognizing disguised
offensive language, such as adversarial attacks that deliberately avoid known
toxic lexicons, or manifestations of implicit bias. Building a large annotated
dataset for such veiled toxicity can be very expensive. In this work, we
propose a framework aimed at fortifying existing toxic speech detectors without
a large labeled corpus of veiled toxicity. Just a handful of probing examples
are used to surface orders of magnitude more disguised offenses. We augment the
toxic speech detector's training data with these discovered offensive examples,
thereby making it more robust to veiled toxicity while preserving its utility
in detecting overt toxicity. |
doi_str_mv | 10.48550/arxiv.2010.03154 |
format | Article |
creationdate | 2020-10-07 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 (open access) |
backlink | https://arxiv.org/abs/2010.03154 |
identifier | DOI: 10.48550/arxiv.2010.03154 |
language | eng |
source | arXiv.org |
subjects | Computer Science - Computation and Language |
title | Fortifying Toxic Speech Detectors Against Veiled Toxicity |
url | https://arxiv.org/abs/2010.03154 |