Evaluation of an Artificial Intelligence Chatbot for Delivery of IR Patient Education Material: A Comparison with Societal Website Content
Published in: Journal of Vascular and Interventional Radiology, 2023-10, Vol. 34 (10), p. 1760-1768.e32
Format: Article
Language: English
Online access: Full text
Abstract: To assess the accuracy, completeness, and readability of patient educational material produced by a machine learning model and to compare the output with that provided by a societal website.
Content from the Society of Interventional Radiology Patient Center website was retrieved, categorized, and organized into discrete questions. These questions were entered into the ChatGPT platform, and the output was analyzed for word and sentence counts, readability using multiple validated scales, factual correctness, and suitability for patient education using the Patient Education Materials Assessment Tool for Printable Materials (PEMAT-P) instrument.
A total of 21,154 words were analyzed, including 7,917 words from the website and 13,377 words representing the total output of the ChatGPT platform across 22 text passages. Compared with the societal website, output from the ChatGPT platform was longer and more difficult to read on 4 of 5 readability scales. The ChatGPT output was incorrect for 12 (11.5%) of 104 questions. When reviewed using the PEMAT-P tool, the ChatGPT content scored lower than the website material. Content from both the website and ChatGPT was significantly above the recommended fifth- or sixth-grade level for patient education, with a mean Flesch-Kincaid grade level of 11.1 (±1.3) for the website and 11.9 (±1.6) for the ChatGPT content.
The ChatGPT platform may produce incomplete or inaccurate patient educational content, and providers should be familiar with the limitations of the system in its current form. Opportunities may exist to fine-tune existing large language models, which could be optimized for the delivery of patient educational content.
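For context, the Flesch-Kincaid grade level reported above is a standard formula over word, sentence, and syllable counts: 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. The minimal Python sketch below illustrates the calculation; the syllable counter is a rough vowel-group heuristic for illustration only, not the validated instrument used in the study.

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels (crude heuristic;
    validated readability tools use dictionary-based syllabification)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words))
            - 15.59)

# Example: a score near 12 corresponds to a high-school senior reading
# level, well above the recommended fifth- or sixth-grade target.
print(round(flesch_kincaid_grade(
    "The catheter is inserted through a small incision in the groin."), 1))
```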
ISSN: 1051-0443, 1535-7732
DOI: 10.1016/j.jvir.2023.05.037