"I Got Flagged for Supposed Bullying, Even Though It Was in Response to Someone Harassing Me About My Disability.": A Study of Blind TikTokers' Content Moderation Experiences

The Human-Computer Interaction (HCI) community has consistently focused on the experiences of users moderated by social media platforms. Recently, scholars have noticed that moderation practices could perpetuate biases, resulting in the marginalization of user groups undergoing moderation. However, most studies have primarily addressed marginalization related to issues such as racism or sexism, with little attention given to the experiences of people with disabilities. In this paper, we present a study on the moderation experiences of blind users on TikTok, also known as "BlindTokers," to address this gap. We conducted semi-structured interviews with 20 BlindTokers and used thematic analysis to analyze the data. Two main themes emerged: BlindTokers' situated content moderation experiences and their reactions to content moderation. We reported on the lack of accessibility on TikTok's platform, contributing to the moderation and marginalization of BlindTokers. Additionally, we discovered instances of harassment from trolls that prompted BlindTokers to respond with harsh language, triggering further moderation. We discussed these findings in the context of the literature on moderation, marginalization, and transformative justice, seeking solutions to address such issues.

Bibliographic Details
Published in: arXiv.org, 2024-01
Main authors: Lyu, Yao; Cai, Jie; Callis, Anisa; Cotter, Kelley; Carroll, John M
Format: Article
Language: English
Subjects: Computer Science - Human-Computer Interaction; Content management; Human-computer interface; Social exclusion; User groups
Online access: Full text
DOI: 10.48550/arxiv.2401.11663
Publisher: Ithaca: Cornell University Library, arXiv.org
Rights: http://creativecommons.org/licenses/by-nc-nd/4.0
Related published version: https://doi.org/10.1145/3613904.3642148
EISSN: 2331-8422
Source: arXiv.org; Free E-Journals
title "I Got Flagged for Supposed Bullying, Even Though It Was in Response to Someone Harassing Me About My Disability.": A Study of Blind TikTokers' Content Moderation Experiences