Comparing performance in a general ophthalmology exam: Artificial intelligence versus ophthalmologists, residents and general practitioners
Published in: Acta Ophthalmologica (Oxford, England), 2025-01, Vol. 103 (S284), p. n/a
Main authors: , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Aims/Purpose: To compare the results on a general ophthalmology exam consisting of 68 questions between two artificial intelligence systems (ChatGPT 3.5 and Google Gemini) and ophthalmologists, residents, and medical graduates.
Methods: A total of 8 participants were recruited: 4 ophthalmologists, two residents (first- and third-year), and two medical graduates who had taken the Spanish medical residency entrance exam. Each subject completed a standardized general ophthalmology exam consisting of 68 questions presented as clinical cases and covering various relevant aspects of the ophthalmological discipline. The scores obtained were recorded and statistically analyzed. The same exam was administered to ChatGPT 3.5 and Google Gemini. A one-way ANOVA with Tukey correction was conducted to compare the groups.
Results: ChatGPT 3.5 and Google Gemini demonstrated acceptable competency, with an average accuracy of 48.8%, but their performance was lower than that of residents (average of 68.4%) and attending ophthalmologists (average of 79.4%). However, their performance was only slightly lower than that of medical graduates without clinical experience in ophthalmology, who averaged 52.2% accuracy. Statistically significant differences were observed between attending ophthalmologists and both the artificial intelligence systems and the medical graduates (p
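For context, the group comparison described in the Methods (one-way ANOVA followed by Tukey correction) can be sketched in Python as below. This is not the authors' code; the per-participant scores, group labels, and alpha level are hypothetical placeholders chosen only to illustrate the procedure.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical exam accuracies (%) per participant/system; not the study's data.
ophthalmologists = [82.4, 77.9, 80.9, 76.5]   # 4 attending ophthalmologists
residents        = [70.6, 66.2]               # first- and third-year residents
graduates        = [54.4, 50.0]               # medical graduates
ai_systems       = [51.5, 46.1]               # ChatGPT 3.5, Google Gemini

# One-way ANOVA across the four groups.
f_stat, p_value = f_oneway(ophthalmologists, residents, graduates, ai_systems)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey HSD post-hoc test for pairwise group comparisons
# (the "Tukey correction" mentioned in the abstract).
scores = np.concatenate([ophthalmologists, residents, graduates, ai_systems])
groups = (["ophthalmologist"] * 4 + ["resident"] * 2
          + ["graduate"] * 2 + ["ai"] * 2)
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```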
ISSN: 1755-375X, 1755-3768
DOI: 10.1111/aos.17158