REDUCING HATE ONLINE: THE MYTH OF COLORBLIND CONTENT POLICY BY ÁNGEL DÍAZ
Published in: Boston University Law Review, 2023-12, Vol. 103 (7), p. 1985-1999
Main authors: ,
Format: Article
Language: English
Subjects:
Online access: Full text
Summary: Focusing on large social media platforms like Facebook/Meta, Twitter, and YouTube, Díaz points out defects in the dominant approach to addressing racism on platforms like these, namely content moderation performed under colorblind rules,4 in which a moderator reviews texts for objectionable material, deletes it, and warns the sender to desist from posting similar passages in the future.5 Díaz's article is one of the first to apply critical race theory in this area.6 His conclusion is straightforward: content moderation will fail unless it takes into account the history and logic of racism.7 Proceeding, as it currently does, in colorblind terms (banning, for example, any imminent threat) will merely magnify the advantages attendant to whiteness by making them appear natural and inevitable.8

But colorblind moderation does not merely shield white supremacists. Minorities who speak out against oppression may easily find themselves banned from a favorite site,9 especially if they do not speak the King's English, if they use terms like "goddamn" or "racist," or if they speak of wanting to bring down the current social order.10 Majority-group users who disparage minorities via code words or circumlocution (as lazy, undeserving, having a poor work ethic, or un-American, for example), however, will pass muster.11 Moderators will deem their speech mere humor or political commentary.12 For Díaz, these flaws are systemic, not products of the occasional reviewer who is asleep at the switch, overworked, or secretly in league with white supremacy.13

I. Virtues of the Díaz Article

Díaz's article illustrates the need for color-conscious content moderation not merely by applying social theory14 but through a series of examples drawn from the world of online content review.15 He shows that the failures of the current approach are not singular but systemic, products of the setting in which online communications take place, especially on large sites like Facebook, Twitter, TikTok, and Reddit.16 As such, better training of content moderators is unlikely to improve matters. Colorblind monitoring will continually overlook racism or even veiled threats, while suppressing indignant counterspeech by minorities under the guise of protecting civility and public safety.17 Some of the smaller, more raw sites do little or no content monitoring,18 and one of the largest, Twitter, has cut back on content monitoring out of concern for its high cost.19 Díaz suggests a systemic approach can reduce the kinds of harm …
ISSN: 0006-8047