Assessment of ChatGPT success with specialty medical knowledge using anaesthesiology board examination practice questions

Saved in:
Bibliographic details
Published in: British journal of anaesthesia : BJA 2023-08, Vol.131 (2), p.e31-e34
Main authors: Shay, Denys, Kumar, Bhawesh, Bellamy, David, Palepu, Anil, Dershwitz, Mark, Walz, Jens M., Schaefer, Maximilian S., Beam, Andrew
Format: Article
Language: English
Subjects:
Online access: Full text
container_end_page e34
container_issue 2
container_start_page e31
container_title British journal of anaesthesia : BJA
container_volume 131
creator Shay, Denys
Kumar, Bhawesh
Bellamy, David
Palepu, Anil
Dershwitz, Mark
Walz, Jens M.
Schaefer, Maximilian S.
Beam, Andrew
description
doi_str_mv 10.1016/j.bja.2023.04.017
format Article
publisher England: Elsevier Ltd
pmid 37210278
eissn 1471-6771
rights 2023 British Journal of Anaesthesia. Published by Elsevier Ltd.
orcidid https://orcid.org/0000-0002-6878-0803
https://orcid.org/0000-0002-6657-2787
https://orcid.org/0000-0002-4720-8787
https://orcid.org/0000-0001-9242-4094
https://orcid.org/0000-0001-5198-6453
fulltext fulltext
identifier ISSN: 0007-0912
ispartof British journal of anaesthesia : BJA, 2023-08, Vol.131 (2), p.e31-e34
issn 0007-0912
1471-6771
1471-6771
language eng
recordid cdi_pubmedcentral_primary_oai_pubmedcentral_nih_gov_11375459
source MEDLINE; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals; Alma/SFX Local Collection
subjects Academic Performance
Anesthesiology
Artificial Intelligence
board examination
ChatGPT
Correspondence
Humans
large language models
medical knowledge
multiple choice questions
specialty qualifications
title Assessment of ChatGPT success with specialty medical knowledge using anaesthesiology board examination practice questions
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-05T13%3A22%3A52IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_pubme&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Assessment%20of%20ChatGPT%20success%20with%20specialty%20medical%20knowledge%20using%20anaesthesiology%20board%20examination%20practice%20questions&rft.jtitle=British%20journal%20of%20anaesthesia%20:%20BJA&rft.au=Shay,%20Denys&rft.date=2023-08-01&rft.volume=131&rft.issue=2&rft.spage=e31&rft.epage=e34&rft.pages=e31-e34&rft.issn=0007-0912&rft.eissn=1471-6771&rft_id=info:doi/10.1016/j.bja.2023.04.017&rft_dat=%3Cproquest_pubme%3E2816761586%3C/proquest_pubme%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2816761586&rft_id=info:pmid/37210278&rft_els_id=S0007091223001927&rfr_iscdi=true