The performance of large language models in intercollegiate Membership of the Royal College of Surgeons examination

Bibliographic details
Published in: Annals of the Royal College of Surgeons of England, 2024-11, Vol. 106 (8), p. 700-704
Main authors: Chan, J; Dong, T; Angelini, G D
Format: Article
Language: English
Online access: Full text
Description: Large language models (LLMs), such as Chat Generative Pre-trained Transformer (ChatGPT) and Bard, utilise deep learning algorithms trained on massive data sets of text and code to generate human-like responses. Several studies have demonstrated satisfactory performance on postgraduate examinations, including the United States Medical Licensing Examination. We aimed to evaluate artificial intelligence performance in Part A of the intercollegiate Membership of the Royal College of Surgeons (MRCS) examination. The MRCS mock examination from Pastest, a question bank commonly used by examinees, was used to assess the performance of three LLMs: GPT-3.5, GPT 4.0 and Bard. Three hundred mock questions were input into the three LLMs, and the responses were recorded and analysed. The pass mark was set at 70%. The overall accuracies for GPT-3.5, GPT 4.0 and Bard were 67.33%, 71.67% and 65.67%, respectively (p = 0.27). The performances of GPT-3.5, GPT 4.0 and Bard in Applied Basic Sciences were 68.89%, 72.78% and 63.33%, respectively (p = 0.15). Furthermore, the three LLMs obtained correct answers in 65.00%, 70.00% and 69.17% of the Principles of Surgery in General questions (p = 0.67). There were no statistically significant differences in overall or subcategory performance among the three LLMs. Our findings demonstrate satisfactory performance for all three LLMs in the MRCS Part A examination, with GPT 4.0 the only LLM that achieved the pass mark.
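The record reports p-values for the three-way accuracy comparisons but does not name the statistical test. A minimal sketch, assuming a chi-squared test on the correct/incorrect counts implied by the reported accuracies (an assumption; the test is not stated in this record), reproduces the reported overall p = 0.27:

from scipy.stats import chi2_contingency

N_QUESTIONS = 300  # mock questions answered by each model

# Correct-answer counts implied by the reported overall accuracies
# (67.33%, 71.67% and 65.67% of 300 questions).
correct = {"GPT-3.5": 202, "GPT 4.0": 215, "Bard": 197}

# 3x2 contingency table of correct vs incorrect answers per model.
table = [[c, N_QUESTIONS - c] for c in correct.values()]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2f}")  # chi2 = 2.65, p = 0.27

# Accuracy against the 70% pass mark used in the study.
for model, c in correct.items():
    status = "pass" if c / N_QUESTIONS >= 0.70 else "fail"
    print(f"{model}: {c / N_QUESTIONS:.2%} ({status})")  # only GPT 4.0 passes

The same construction applied to the subcategory counts (the reported percentages imply 180 Applied Basic Sciences and 120 Principles of Surgery in General questions per model) also reproduces the reported p = 0.15 and p = 0.67.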
DOI: 10.1308/rcsann.2024.0023
ISSN: 0035-8843
EISSN: 1478-7083
PMID: 38445611
Source: PubMed Central (open access); MEDLINE; EZB Electronic Journals Library
Subjects:
subjects Anatomy & physiology
Artificial Intelligence
Chatbots
Clinical Competence
Deep Learning
Educational Measurement - methods
General Surgery - education
Humans
Innovation
Language
Large language models
Licensing examinations
Neurosurgery
Performance evaluation
Physiology
Societies, Medical
Surgeons
Surgeons - education
Surgeons - statistics & numerical data
Thoracic surgery
United Kingdom