Evaluation of GPT-3.5 and GPT-4 for supporting real-world information needs in healthcare delivery
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Despite growing interest in using large language models (LLMs) in healthcare,
current explorations do not assess the real-world utility and safety of LLMs in
clinical settings. Our objective was to determine whether two LLMs can serve
information needs submitted by physicians as questions to an informatics
consultation service in a safe and concordant manner. Sixty-six questions from
an informatics consult service were submitted to GPT-3.5 and GPT-4 via simple
prompts. Twelve physicians assessed the LLM responses for potential patient harm
and concordance with existing reports from an informatics consultation service.
Physician assessments were summarized based on majority vote. For no questions
did a majority of physicians deem either LLM response harmful. For GPT-3.5,
responses to 8 questions were concordant with the informatics consult report,
20 were discordant, and 9 could not be assessed; 29 responses had no majority
among "Agree", "Disagree", and "Unable to assess". For GPT-4, responses to 13
questions were concordant, 15 were discordant, and 3 could not be assessed;
35 responses had no majority.
were largely devoid of overt harm, but less than 20% of the responses agreed
with an answer from an informatics consultation service, responses contained
hallucinated references, and physicians were divided on what constitutes harm.
These results suggest that while general purpose LLMs are able to provide safe
and credible responses, they often do not meet the specific information need of
a given question. A definitive evaluation of the usefulness of LLMs in
healthcare settings will likely require additional research on prompt
engineering, calibration, and custom-tailoring of general purpose models.
DOI: 10.48550/arxiv.2304.13714
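
As a rough illustration of the majority-vote summarization described in the abstract, the sketch below aggregates one question's physician concordance labels. The label set, the `summarize_votes` helper, and the example votes are assumptions made for illustration; they are not taken from the study's code or data.

```python
from collections import Counter

# Concordance labels mentioned in the abstract (assumed spelling).
LABELS = ("Agree", "Disagree", "Unable to assess")

def summarize_votes(votes):
    """Summarize one question's physician assessments by majority vote.

    Returns the label chosen by a strict majority of assessors,
    or "No majority" if no label exceeds half of the votes.
    """
    counts = Counter(votes)
    label, top = counts.most_common(1)[0]
    return label if top > len(votes) / 2 else "No majority"

# Hypothetical assessments from 12 physicians for a single LLM response.
example_votes = ["Agree"] * 5 + ["Disagree"] * 4 + ["Unable to assess"] * 3
print(summarize_votes(example_votes))  # -> "No majority" (5 of 12 is not a strict majority)
```

Under this reading, a question counts as concordant, discordant, or unassessable only when a strict majority of the twelve physicians chose that label; otherwise it falls into the "no majority" group reported in the abstract.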