Assessment Study of ChatGPT-3.5's Performance on the Final Polish Medical Examination: Accuracy in Answering 980 Questions
Published in: Healthcare (Basel) 2024-08, Vol. 12 (16), p. 1637
Main authors:
Format: Article
Language: English
Online access: Full text
Abstract: The use of artificial intelligence (AI) in education is growing dynamically, and models such as ChatGPT show potential for enhancing medical education. In Poland, to obtain a medical diploma, candidates must pass the Medical Final Examination, which consists of 200 questions, each with one correct answer; it is administered in Polish and assesses students' comprehensive medical knowledge and readiness for clinical practice. The aim of this study was to determine how ChatGPT-3.5 handles the questions included in this exam.
This study considered 980 questions from five examination sessions of the Medical Final Examination conducted by the Medical Examination Center in the years 2022-2024. The analysis accounted for each question's medical field, its difficulty index, and its type (theoretical versus case-study).
The average correct answer rate achieved by ChatGPT across the five examination sessions hovered around 60% and was lower (p < 0.001) than the average score achieved by the examinees. The lowest percentage of correct answers was in hematology (42.1%), while the highest was in endocrinology (78.6%). The difficulty index of the questions showed a statistically significant correlation with the correctness of the answers (p = 0.04). Questions for which ChatGPT-3.5 provided incorrect answers had a lower (p < 0.001) percentage of correct responses among examinees. The type of question analyzed did not significantly affect the correctness of the answers (p = 0.46).
This study indicates that ChatGPT-3.5 can be an effective tool to assist in passing the final medical exam, but the results should be interpreted cautiously. Further verification of answer correctness using various AI tools is recommended.
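The abstract does not state the authors' exact statistical procedures, but the three reported comparisons map onto standard item-analysis tests. Below is a minimal sketch, assuming per-question records of the difficulty index (conventionally, the proportion of examinees who answer the item correctly), ChatGPT's correctness, and the question type; the point-biserial correlation, Mann-Whitney U, and chi-square tests are illustrative choices, not necessarily the study's methods, and all data below are simulated placeholders.

```python
# A minimal, illustrative sketch (not the authors' code) of the three
# comparisons summarized in the abstract, using simulated placeholder data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 980  # questions across the five examination sessions

# Per-question variables (all values here are fabricated for illustration):
#   difficulty   - difficulty index: fraction of examinees answering correctly
#   gpt_correct  - 1 if ChatGPT-3.5 answered the item correctly, else 0
#   case_study   - 1 for case-study items, 0 for theoretical items
difficulty = rng.uniform(0.3, 0.95, n)
gpt_correct = (rng.random(n) < 0.60).astype(int)   # ~60% correct overall
case_study = rng.integers(0, 2, n)

# (1) Is item difficulty related to whether ChatGPT answered correctly?
r, p = stats.pointbiserialr(gpt_correct, difficulty)
print(f"difficulty index vs. ChatGPT correctness: r = {r:.3f}, p = {p:.3f}")

# (2) Do items ChatGPT missed have a lower examinee correct-answer rate?
_, p = stats.mannwhitneyu(difficulty[gpt_correct == 1],
                          difficulty[gpt_correct == 0],
                          alternative="greater")
print(f"missed items harder for examinees: p = {p:.3g}")

# (3) Does question type (theoretical vs. case study) affect correctness?
table = [[np.sum((case_study == t) & (gpt_correct == c)) for c in (0, 1)]
         for t in (0, 1)]
chi2, p, _, _ = stats.chi2_contingency(table)
print(f"question type vs. correctness: chi2 = {chi2:.2f}, p = {p:.3f}")
```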
ISSN: 2227-9032
DOI: 10.3390/healthcare12161637