In-game Toxic Language Detection: Shared Task and Attention Residuals

In-game toxic language has become a pressing issue in the gaming industry and community. Several frameworks and models for online game toxicity analysis have been proposed, but detecting toxicity remains challenging because in-game chat messages are extremely short. In this paper, we describe how the in-game toxic language shared task was established using real-world in-game chat data. In addition, we introduce a model/framework for toxic language token tagging (slot filling) from in-game chat. The data and code will be released.
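The token tagging (slot filling) formulation described above can be illustrated with a minimal sketch: each token in a chat message receives a toxicity tag, as in standard transformer-based token classification. This is an illustrative baseline only; the model name, BIO-style label set, and example chat line below are assumptions, not the attention-residual architecture proposed in the paper.

```python
# Minimal sketch: token-level toxicity tagging over an in-game chat line,
# framed as token classification with a pretrained transformer backbone.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

LABELS = ["O", "B-TOXIC", "I-TOXIC"]  # assumed BIO-style tag set

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)

chat_line = "gg ez uninstall the game noob"  # hypothetical chat message
inputs = tokenizer(chat_line, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)

pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, pred_ids):
    # With an untrained classification head the tags are arbitrary;
    # fine-tuning on labelled in-game chat would make them meaningful.
    print(f"{token:>12s} -> {LABELS[pred]}")
```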

Full Description

Bibliographic Details
Main Authors: Jia, Yuanzhe; Wu, Weixuan; Cao, Feiqi; Han, Soyeon Caren
Format: Article
Language: English
Subjects: Computer Science - Computation and Language
Online Access: https://arxiv.org/abs/2211.05995
creator Jia, Yuanzhe; Wu, Weixuan; Cao, Feiqi; Han, Soyeon Caren
description In-game toxic language has become a pressing issue in the gaming industry and community. Several frameworks and models for online game toxicity analysis have been proposed, but detecting toxicity remains challenging because in-game chat messages are extremely short. In this paper, we describe how the in-game toxic language shared task was established using real-world in-game chat data. In addition, we introduce a model/framework for toxic language token tagging (slot filling) from in-game chat. The data and code will be released.
doi_str_mv 10.48550/arxiv.2211.05995
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2211.05995
language eng
recordid cdi_arxiv_primary_2211_05995
source arXiv.org
subjects Computer Science - Computation and Language
title In-game Toxic Language Detection: Shared Task and Attention Residuals