Gender bias in visual generative artificial intelligence systems and the socialization of AI
Published in: AI & society 2024-11
Main authors: ,
Format: Article
Language: English
Online access: Full text
Abstract: Substantial research over the last ten years has indicated that many generative artificial intelligence systems ("GAI") have the potential to produce biased results, particularly with respect to gender. This potential for bias has grown progressively more important in recent years as GAI has become increasingly integrated into multiple critical sectors, such as healthcare, consumer lending, and employment. While much of the study of gender bias in popular GAI systems has focused on text-based GAI such as OpenAI's ChatGPT and Google's Gemini (formerly Bard), this article describes the results of a confirmatory experiment on gender bias in visual GAI systems. The authors argue that gender bias in visual GAI systems may be more troubling than bias in textual GAI because images are more memorable and have a greater capacity for emotional communication. They go on to offer four potential approaches to gender bias in visual GAI based on the roles visual GAI could play in modern society. The article concludes with a discussion of how dominant societal values could influence a choice among those four approaches, along with suggestions for further research.
ISSN: 0951-5666, 1435-5655
DOI: 10.1007/s00146-024-02129-1