Centering health equity in large language model deployment

Full Description

Bibliographic Details
Published in: PLOS Digital Health 2023-10, Vol. 2 (10), p. e0000367
Main Authors: Singh, Nina; Lawrence, Katharine; Richardson, Safiya; Mann, Devin M
Format: Article
Language: English
Online Access: Full text
Description
Summary: The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. [...]LLM products that organizations purchase for clinicians will contain behind-the-scenes prompting that shapes how equitable responses are. [...]a recent study found that when prompted with symptoms of high-risk chest pain and a patient's insurance status, ChatGPT correctly triaged insured patients to the emergency department but inappropriately suggested that uninsured patients either present to a community health center (a less costly treatment venue) or the emergency department [21]. Beth Israel Deaconess Medical Center is starting to intentionally engage medical trainees with LLMs to help them better understand what these tools can and cannot do [25], and NYU Langone Health has educated staff members across the healthcare system and partnered with them to develop and test their own LLM ideas responsibly within a private, secure GPT instance [26].
ISSN: 2767-3170
DOI: 10.1371/journal.pdig.0000367