A Critical Reflection on the Use of Toxicity Detection Algorithms in Proactive Content Moderation Systems
Main Authors: , , ,
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Toxicity detection algorithms, originally designed with reactive content moderation in mind, are increasingly being deployed in proactive end-user interventions to moderate content. Through a socio-technical lens, and focusing on the contexts in which they are applied, we explore the use of these algorithms in proactive moderation systems. Placing a toxicity detection algorithm in an imagined virtual mobile keyboard, we critically explore how such algorithms could be used to proactively reduce the sending of toxic content. We present findings from design workshops conducted with four distinct stakeholder groups and find concerns around how contextual complexities may exacerbate inequalities in content moderation processes. Whilst only specific user groups are likely to benefit directly from these interventions, we highlight the potential for other groups to misuse them to circumvent detection, validate and gamify hate, and manipulate algorithmic models to exacerbate harm.
DOI: 10.48550/arxiv.2401.10629
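The abstract's central design probe is a toxicity classifier embedded in a virtual keyboard that nudges users before a potentially toxic message is sent. The following is a minimal, hypothetical sketch of what such a proactive intervention loop could look like; the `score_toxicity` keyword heuristic, the 0.8 threshold, and the nudge wording are illustrative assumptions, not details from the paper, which does not specify an implementation.

```python
# Minimal sketch of a proactive, keyboard-level toxicity check.
# Everything here is a hypothetical illustration: a real deployment
# would call a trained classifier rather than a keyword heuristic.

def score_toxicity(text: str) -> float:
    """Placeholder classifier returning a toxicity score in [0, 1]."""
    toxic_markers = {"idiot", "stupid", "hate you"}
    hits = sum(marker in text.lower() for marker in toxic_markers)
    return min(1.0, hits / 2)


def keyboard_intercept(draft: str, threshold: float = 0.8) -> str | None:
    """Return a nudge message if the draft looks toxic, else None.

    Proactive moderation acts *before* sending: the keyboard warns the
    user, rather than a platform removing the message after the fact.
    """
    if score_toxicity(draft) >= threshold:
        return "This message may be hurtful. Send anyway?"
    return None


if __name__ == "__main__":
    for draft in ["see you at 8", "you idiot, I hate you"]:
        nudge = keyboard_intercept(draft)
        print(f"{draft!r} -> {'send' if nudge is None else nudge}")
```

The sketch also hints at the misuse concerns the paper raises: because the check runs client-side, users can probe the threshold to circumvent detection or treat triggering the warning as a game.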