Toward children-centric AI: a case for a growth model in children-AI interactions

Bibliographic Details
Published in: AI & Society, 2024-06, Vol. 39 (3), p. 1303–1315
Author: La Fors, Karolina
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Abstract: This article advocates for a hermeneutic model for children-AI (age group 7–11 years) interactions in which the desirable purpose of children's interaction with artificial intelligence (AI) systems is children's growth. The article perceives AI systems with machine-learning components as having a recursive element when interacting with children: they can learn from encounters with children and incorporate data from those interactions, not only from prior programming. Given the purpose of growth and this recursive element of AI, the article argues for distinguishing the interpretation of bias within the AI ethics and responsible AI discourse. Interpreting bias as a preference and distinguishing between positive (pro-diversity) and negative (discriminatory) bias is needed, as this would serve children's healthy psychological and moral development. The human-centric AI discourse advocates aligning the capacities of humans and the capabilities of machines by focusing both on the purpose of humans and on the purpose of machines for humans. The emphasis on mitigating negative biases through data protection, AI law, and certain value-sensitive design frameworks demonstrates that the purpose of the machine for humans is prioritized over the purpose of humans. These top-down frameworks often narrow the purpose of machines to do-no-harm, and they fail to account for the bottom-up views and developmental needs of children. Therefore, applying a growth model for children-AI interactions that incorporates learning from negative AI-mediated biases and amplifying positive ones would benefit children's development and children-centric AI innovation. Consequently, the article explores: What challenges arise from mitigating negative biases and amplifying positive biases in children-AI interactions, and how can a growth model address these? To answer this, the article recommends applying a growth model in open AI co-creational spaces with and for children. In such spaces, human–machine and human–human value alignment methods can be collectively applied in such a manner as to (1) sensitize children to the effects of AI-mediated negative biases on themselves and others; (2) enable children to appropriate and imbue top-down values of diversity and non-discrimination with their own meanings; (3) enforce children's right to identity and non-discrimination; (4) guide children in developing an inclusive mindset; and (5) inform top-down nor…
ISSN: 0951-5666
eISSN: 1435-5655
DOI: 10.1007/s00146-022-01579-9