Algorithmic Political Bias Can Reduce Political Polarization


Bibliographic Details
Published in: Philosophy & Technology, 2022-09, Vol. 35 (3), Article 81
Author: Peters, Uwe
Format: Article
Language: English
Online access: Full text
Description

Abstract: Does algorithmic political bias contribute to an entrenchment and polarization of political positions? Franke (Philosophy and Technology, 35, 7, 2022) argues that it may do so because the bias involves classifications of people as liberals, conservatives, etc., and individuals often conform to the ways in which they are classified. I provide a novel example of this phenomenon in human–computer interactions and introduce a social psychological mechanism (what I shall call ‘implied political labeling’) that has been overlooked in this context but should be experimentally explored. Furthermore, while Franke proposes that algorithmic political classifications entrench political identities, I contend that they may often produce the opposite result. They can lead people to change in ways that disconfirm the classifications (thus causing ‘looping effects’). Consequently and counterintuitively, algorithmic political bias can in fact decrease political entrenchment and polarization.
ISSN: 2210-5433, 2210-5441
DOI: 10.1007/s13347-022-00576-6