Navigating artificial intelligence in care homes: Competing stakeholder views of trust and logics of care

Bibliographic Details
Published in: Social science & medicine (1982), 2024-10, Vol. 358, p. 117187, Article 117187
Main authors: Neves, Barbara Barbosa; Omori, Maho; Petersen, Alan; Vered, Mor; Carter, Adrian
Format: Article
Language: English
Online access: Full text
Abstract: The COVID-19 pandemic shed light on systemic issues plaguing care (nursing) homes, from staff shortages to substandard healthcare. Artificial Intelligence (AI) technologies, including robots and chatbots, have been proposed as solutions to such issues. Yet, socio-ethical concerns about the implications of AI for health and care practices have also been growing among researchers and practitioners. At a time of AI promise and concern, it is critical to understand how those who develop and implement these technologies perceive their use and impact in care homes. Combining a sociological approach to trust with Annemarie Mol's logic of care and Jeannette Pols' concept of fitting, we draw on 18 semi-structured interviews with care staff, advocates, and AI developers to explore notions of human-AI care. Our findings show positive perceptions and experiences of AI in care homes, but also ambivalence. While integrative care incorporating humans and technology was salient across interviewees, we also identified experiential, contextual, and knowledge divides between AI developers and care staff. For example, developers lacked experiential knowledge of care homes' daily functioning and constraints, which influenced how they designed AI. Care staff demonstrated limited experiential knowledge of AI or held more critical views about contexts of use, affecting their trust in these technologies. Different understandings of ‘good care’ were also evident: ‘warm’ care was sometimes linked to human care and ‘cold’ care to technology. In conclusion, understandings and experiences of AI are marked by different logics of sociotechnical care and related levels of trust in these sensitive settings.

Highlights:
• Stakeholders show different understandings of how AI should be used in care homes.
• Trust is critical to shaping perceptions and practices of human-technology care.
• A logics of sociotechnical care offers a conceptual framework to map diverse views.
ISSN: 0277-9536
1873-5347
DOI: 10.1016/j.socscimed.2024.117187