PromptCBLUE: A Chinese Prompt Tuning Benchmark for the Medical Domain

Biomedical language understanding benchmarks are the driving force behind artificial intelligence applications with large language model (LLM) back-ends. However, most current benchmarks (a) are limited to English, which makes it challenging to replicate many of the English-language successes in other languages; (b) focus on knowledge probing of LLMs and neglect to evaluate how LLMs apply this knowledge to a wide range of biomedical tasks; or (c) have become publicly available corpora that were leaked to LLMs during pre-training. To facilitate research on medical LLMs, we re-build the Chinese Biomedical Language Understanding Evaluation (CBLUE) benchmark into a large-scale prompt-tuning benchmark, PromptCBLUE. Our benchmark is a suitable test-bed and an online platform for evaluating the multi-task capabilities of Chinese LLMs on a wide range of biomedical tasks, including medical entity recognition, medical text classification, medical natural language inference, medical dialogue understanding, and medical content/dialogue generation. To establish evaluation on these tasks, we experiment with and report results for nine current Chinese LLMs fine-tuned with different fine-tuning techniques.

Bibliographic Details
Published in: arXiv.org 2023-10
Main Authors: Zhu, Wei; Wang, Xiaoling; Zheng, Huanran; Chen, Mosha; Tang, Buzhou
Format: Article
Language: English
Subjects: Artificial intelligence; Benchmarks; English language; Large language models; Medical research; Natural language processing
Online Access: Full text
container_title arXiv.org
creator Zhu, Wei
Wang, Xiaoling
Zheng, Huanran
Chen, Mosha
Tang, Buzhou
description Biomedical language understanding benchmarks are the driving force behind artificial intelligence applications with large language model (LLM) back-ends. However, most current benchmarks (a) are limited to English, which makes it challenging to replicate many of the English-language successes in other languages; (b) focus on knowledge probing of LLMs and neglect to evaluate how LLMs apply this knowledge to a wide range of biomedical tasks; or (c) have become publicly available corpora that were leaked to LLMs during pre-training. To facilitate research on medical LLMs, we re-build the Chinese Biomedical Language Understanding Evaluation (CBLUE) benchmark into a large-scale prompt-tuning benchmark, PromptCBLUE. Our benchmark is a suitable test-bed and an online platform for evaluating the multi-task capabilities of Chinese LLMs on a wide range of biomedical tasks, including medical entity recognition, medical text classification, medical natural language inference, medical dialogue understanding, and medical content/dialogue generation. To establish evaluation on these tasks, we experiment with and report results for nine current Chinese LLMs fine-tuned with different fine-tuning techniques.
format Article
identifier EISSN: 2331-8422
ispartof arXiv.org, 2023-10
issn 2331-8422
language eng
recordid cdi_proquest_journals_2881053096
source Freely Accessible Journals
subjects Artificial intelligence
Benchmarks
English language
Large language models
Medical research
Natural language processing
title PromptCBLUE: A Chinese Prompt Tuning Benchmark for the Medical Domain