Studying human-to-computer bias transference
Published in: AI & Society, 2023-08, Vol. 38 (4), pp. 1659-1683
Main authors: , ,
Format: Article
Language: English
Online access: Full text
Abstract: It is generally agreed that one origin of machine bias lies in characteristics of the dataset on which the algorithms are trained, i.e., the data does not warrant a generalized inference. We, however, hypothesize that a different ‘mechanism’ may also be responsible for machine bias, namely that biases may originate from (i) the programmers’ cultural background, including education or line of work, or (ii) the contextual programming environment, including software requirements or developer tools. Combining an experimental and comparative design, we study the effects of cultural and contextual metaphors and test whether each of these is ‘transferred’ from the programmer to the program, thus constituting a machine bias. Our results show that (i) cultural metaphors influence the programmer’s choices and (ii) contextual metaphors induced through priming can be used to moderate or exacerbate the effects of the cultural metaphors. Our studies are purposely performed with users of varying educational backgrounds and programming skills, ranging from novice to proficient.
ISSN: 0951-5666, 1435-5655
DOI: 10.1007/s00146-021-01328-4