Assumptions About Algorithms’ Capacity for Discrimination
Published in: Personality and Social Psychology Bulletin, 2022-04, Vol. 48(4), pp. 582-595
Main authors:
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Although their implementation has inspired optimism in many domains, algorithms can both systematize discrimination and obscure its presence. In seven studies, we test the hypothesis that people instead tend to assume algorithms discriminate less than humans due to beliefs that algorithms tend to be both more accurate and less emotional evaluators. As a result of these assumptions, people are more interested in being evaluated by an algorithm when they anticipate that discrimination against them is possible. Finally, we investigate the degree to which information about how algorithms are trained on data sets of human judgments and decisions changes people's increased preference for algorithms when they themselves anticipate discrimination. Taken together, these studies indicate that algorithms appear less discriminatory than humans, making people (potentially erroneously) more comfortable with their use.
ISSN: 0146-1672, 1552-7433
DOI: 10.1177/01461672211016187