ChatGPT versus strabismus specialist on common questions about strabismus management: a comparative analysis of appropriateness and readability
Published in: Marmara Medical Journal 2024-01, Vol. 37 (3), p. 323-326
Format: Article
Language: English
Online access: Full text
Abstract:
Objective: Patients widely use artificial intelligence-based chatbots, and this study aims to determine their utility and limitations regarding questions about strabismus. The answers to common questions about the management of strabismus provided by Chat Generative Pre-trained Transformer (ChatGPT)-3.5, an artificial intelligence-powered chatbot, were compared to answers from a strabismus specialist (The Specialist) in terms of appropriateness and readability.
Patients and Methods: In this descriptive, cross-sectional study, a list of questions from strabismus patients or caregivers in outpatient clinics about treatment, prognosis, postoperative care, and complications was posed to ChatGPT and The Specialist. The answers of ChatGPT were classified as appropriate or not, with the answers of The Specialist taken as the reference. The readability of all the answers was assessed according to the parameters of the Readable online toolkit.
Results: All answers provided by ChatGPT were classified as appropriate. The mean Flesch-Kincaid Grade Levels of the respective answers given by ChatGPT and The Specialist were 13.75±1.55 and 10.17±2.17 (p
ISSN: 1019-1941, 1309-9469
DOI: 10.5472/marumj.1571218