Accuracy of Information and References Using ChatGPT-3 for Retrieval of Clinical Radiological Information


Published in: Canadian Association of Radiologists journal, 2024-02, Vol. 75 (1), p. 69-73
Authors: Wagner, Matthias W.; Ertl-Wagner, Birgit B.
Format: Article
Language: English
Online access: Full text
Purpose: To assess the accuracy of answers provided by ChatGPT-3 when prompted with questions from the daily routine of radiologists, and to evaluate the text response when ChatGPT-3 was prompted to provide references for a given answer.

Methods: ChatGPT-3 (OpenAI, San Francisco) is an artificial intelligence chatbot based on a large language model (LLM) designed to generate human-like text. A total of 88 questions were submitted to ChatGPT-3 as textual prompts, distributed equally across 8 subspecialty areas of radiology. The responses provided by ChatGPT-3 were assessed for correctness by cross-checking them against peer-reviewed, PubMed-listed references. In addition, the references provided by ChatGPT-3 were evaluated for authenticity.

Results: A total of 59 of 88 responses (67%) to radiological questions were correct, while 29 responses (33%) contained errors. Of the 343 references provided, only 124 (36.2%) could be found through an internet search, while 219 (63.8%) appeared to have been generated by ChatGPT-3. Of the 124 identified references, only 47 (37.9%) were considered to provide enough background to correctly answer the corresponding questions (24 questions, 37.5%).

Conclusion: In this pilot study, ChatGPT-3 provided correct responses to questions from the daily clinical routine of radiologists in only about two thirds of cases, while the remainder of responses contained errors. The majority of the provided references could not be found, and only a minority of them contained the correct information to answer the question. Caution is advised when using ChatGPT-3 to retrieve radiological information.
DOI: 10.1177/08465371231171125
PMID: 37078489
Publisher: SAGE Publications, Los Angeles, CA
ISSN: 0846-5371
EISSN: 1488-2361
Source: SAGE Premier Complete