An extensive benchmark study on biomedical text generation and mining with ChatGPT

Abstract Motivation In recent years, the development of natural language processing (NLP) technologies and deep learning hardware has led to significant improvements in large language models (LLMs). ChatGPT, the state-of-the-art LLM built on GPT-3.5 and GPT-4, shows excellent capabilities in general...

Detailed description

Saved in:
Bibliographic details
Published in: Bioinformatics (Oxford, England), 2023-09, Vol.39 (9)
Main authors: Chen, Qijie, Sun, Haotong, Liu, Haoyang, Jiang, Yinghui, Ran, Ting, Jin, Xurui, Xiao, Xianglu, Lin, Zhimin, Chen, Hongming, Niu, Zhangmin
Format: Article
Language: eng
Subjects:
Online access: Full text
container_end_page
container_issue 9
container_start_page
container_title Bioinformatics (Oxford, England)
container_volume 39
creator Chen, Qijie
Sun, Haotong
Liu, Haoyang
Jiang, Yinghui
Ran, Ting
Jin, Xurui
Xiao, Xianglu
Lin, Zhimin
Chen, Hongming
Niu, Zhangmin
description Abstract Motivation In recent years, the development of natural language processing (NLP) technologies and deep learning hardware has led to significant improvements in large language models (LLMs). ChatGPT, the state-of-the-art LLM built on GPT-3.5 and GPT-4, shows excellent capabilities in general language understanding and reasoning, and researchers have tested the GPT models on a variety of NLP tasks and benchmarks with excellent results. Encouraged by its performance on everyday conversation, researchers began to explore ChatGPT's capacity in areas of expertise that require professional education for humans; here we are interested in the biomedical domain. Results To evaluate the performance of ChatGPT on biomedical tasks, this article presents a comprehensive benchmark study of ChatGPT on biomedical corpora, including article abstracts, clinical trial descriptions, and biomedical questions. Typical NLP tasks such as named entity recognition, relation extraction, sentence similarity, question answering, and document classification are included. Overall, ChatGPT achieved a BLURB score of 58.50, while the state-of-the-art model scored 84.30. Through a series of experiments, we demonstrate the effectiveness and versatility of ChatGPT in biomedical text understanding, reasoning, and generation, as well as the limitations of the ChatGPT build on GPT-3.5. Availability and implementation All datasets are available from the BLURB benchmark at https://microsoft.github.io/BLURB/index.html. The prompts are described in the article.
doi_str_mv 10.1093/bioinformatics/btad557
format Article
fulltext fulltext
identifier ISSN: 1367-4811
ispartof Bioinformatics (Oxford, England), 2023-09, Vol.39 (9)
issn 1367-4811
1367-4803
1367-4811
language eng
recordid cdi_pubmedcentral_primary_oai_pubmedcentral_nih_gov_10562950
source DOAJ Directory of Open Access Journals; Access via Oxford University Press (Open Access Collection); EZB-FREE-00999 freely available EZB journals; PubMed Central; Alma/SFX Local Collection
subjects Original Paper
title An extensive benchmark study on biomedical text generation and mining with ChatGPT
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-12T04%3A42%3A29IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_pubme&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=An%20extensive%20benchmark%20study%20on%20biomedical%20text%20generation%20and%20mining%20with%20ChatGPT&rft.jtitle=Bioinformatics%20(Oxford,%20England)&rft.au=Chen,%20Qijie&rft.date=2023-09-02&rft.volume=39&rft.issue=9&rft.issn=1367-4811&rft.eissn=1367-4811&rft_id=info:doi/10.1093/bioinformatics/btad557&rft_dat=%3Cproquest_pubme%3E2863303799%3C/proquest_pubme%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2863303799&rft_id=info:pmid/37682111&rft_oup_id=10.1093/bioinformatics/btad557&rfr_iscdi=true