Evaluating ChatGPT as a self‐learning tool in medical biochemistry: A performance assessment in undergraduate medical university examination

Bibliographic Details
Published in: Biochemistry and Molecular Biology Education, 2024-03, Vol. 52 (2), p. 237-248
Authors: Surapaneni, Krishna Mohan; Rajajagadeesan, Anusha; Goudhaman, Lakshmi; Lakshmanan, Shalini; Sundaramoorthi, Saranya; Ravi, Dineshkumar; Rajendiran, Kalaiselvi; Swaminathan, Porchelvan
Format: Article
Language: English
Abstract: The emergence of ChatGPT as one of the most advanced chatbots, and its ability to generate diverse content, has prompted numerous discussions worldwide regarding its utility, particularly in advancing medical education and research. This study assesses the performance of ChatGPT in medical biochemistry to evaluate its potential as an effective self-learning tool for medical students. The evaluation was carried out using the university examination question papers for both Part 1 and Part 2 of medical biochemistry, each comprising theory questions and multiple choice questions (MCQs) for a total of 100 marks per paper. The questions were posed to ChatGPT, and three raters independently reviewed and scored the answers to prevent bias in scoring. We computed the inter-item correlation matrix and the intraclass correlation between Raters 1, 2, and 3. For the MCQs, symmetric measures in the form of kappa values (a measure of agreement) were calculated between Raters 1, 2, and 3. ChatGPT generated relevant and appropriate answers to all questions, along with explanations for the MCQs. ChatGPT “passed” the medical biochemistry university examination with an average score of 117 out of 200 (58%) across both papers, securing 60 ± 2.29 in Paper 1 and 57 ± 4.36 in Paper 2. The kappa value for every cross-analysis of the Rater 1, Rater 2, and Rater 3 scores on the MCQs was 1.000. The evaluation of ChatGPT as a self-learning tool in medical biochemistry has yielded important insights. While it is encouraging that ChatGPT demonstrated proficiency in this area, the overall score of 58% indicates that there is work to be done. To unlock its full potential as a self-learning tool, ChatGPT must generate content that is not only accurate but also comprehensive and contextually relevant.
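
The inter-rater agreement analysis described in the abstract (Cohen's kappa between rater pairs and an intraclass correlation across all three raters) can be sketched in a few lines of Python. The snippet below is a minimal illustration only, not the authors' analysis: the per-question score arrays, the ten-question example size, and the use of scikit-learn's cohen_kappa_score and pingouin's intraclass_corr are all assumptions made for demonstration.

    # Minimal sketch (not the authors' code) of the inter-rater agreement analysis
    # described in the abstract. The per-question marks below are hypothetical; in
    # the study, all rater pairs agreed perfectly on the MCQs (kappa = 1.000).
    import pandas as pd
    import pingouin as pg
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical MCQ marks (1 = correct, 0 = incorrect) awarded by three raters
    # to the same ten answers. Rater 3 disagrees on one item for illustration.
    rater1 = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
    rater2 = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
    rater3 = [1, 1, 1, 1, 0, 1, 1, 1, 0, 1]

    # Pairwise Cohen's kappa (chance-corrected agreement); identical scorings
    # such as rater1 vs. rater2 give kappa = 1.0.
    for label, (a, b) in {"R1 vs R2": (rater1, rater2),
                          "R1 vs R3": (rater1, rater3),
                          "R2 vs R3": (rater2, rater3)}.items():
        print(label, round(cohen_kappa_score(a, b), 3))

    # Intraclass correlation across all three raters, from scores in long format
    # (one row per question-rater pair).
    scores = pd.DataFrame({
        "question": list(range(10)) * 3,
        "rater": ["R1"] * 10 + ["R2"] * 10 + ["R3"] * 10,
        "score": rater1 + rater2 + rater3,
    })
    icc = pg.intraclass_corr(data=scores, targets="question",
                             raters="rater", ratings="score")
    print(icc[["Type", "ICC"]])

Kappa is used here for the categorical MCQ marks, while the intraclass correlation is the more natural agreement measure for the continuous theory-question scores.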
ISSN: 1470-8175, 1539-3429
DOI: 10.1002/bmb.21808