Towards Training A Chinese Large Language Model for Anesthesiology

Medical large language models (LLMs) have recently gained popularity due to their significant practical utility. However, most existing research focuses on general medicine, and LLMs for specific fields such as anesthesiology remain understudied. To fill this gap, we introduce Hypnos, a Chinese anesthesia model built upon existing LLMs, e.g., Llama. Hypnos makes three contributions: 1) Data acquired from current LLMs, e.g., via Self-Instruct, likely contains inaccuracies; Hypnos therefore applies a cross-filtering strategy to improve data quality, using one LLM to assess the quality of the data generated by another LLM and filtering out low-quality data. 2) Hypnos employs a general-to-specific training strategy that first fine-tunes LLMs on general medical data and subsequently refines the fine-tuned LLMs on data specifically from anesthesiology; the general medical data supplement the medical expertise in anesthesiology and enhance the effectiveness of Hypnos' generation. 3) We introduce a standardized benchmark for evaluating medical LLMs in anesthesiology, comprising both publicly available instances from the Internet and privately obtained cases from hospitals. Hypnos outperforms other medical LLMs in anesthesiology on automatic metrics, GPT-4 evaluation, and human evaluation on the benchmark dataset.
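The cross-filtering strategy described in the abstract can be sketched as follows. This is a hypothetical illustration, not code from the paper: `judge_score` stands in for a second LLM rating each instruction-response pair generated by the first LLM, and the threshold and example pairs are invented for the sketch.

```python
def judge_score(pair):
    """Placeholder for a second LLM rating a generated pair on a 1-5 scale.

    A real judge would prompt an LLM with the pair and parse its rating;
    here we simply read a mock score attached to the example data.
    """
    return pair.get("score", 0)

def cross_filter(pairs, threshold=3):
    """Keep only pairs the judge model rates at or above the threshold."""
    return [p for p in pairs if judge_score(p) >= threshold]

# Mock instruction data as one LLM might generate it via Self-Instruct.
generated = [
    {"instruction": "Explain the ASA physical status classes.", "score": 5},
    {"instruction": "Describe preoperative fasting guidelines.", "score": 4},
    {"instruction": "Garbled or inaccurate generated output.", "score": 1},
]

kept = cross_filter(generated)
print(len(kept))  # 2 of the 3 generated pairs survive filtering
```

The design point is that the judging model is distinct from the generating model, so systematic errors of the generator are less likely to be rated highly by the judge.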

Bibliographic Details
Published in: arXiv.org, 2024-03
Main authors: Wang, Zhonghai; Jiang, Jie; Zhan, Yibing; Zhou, Bohao; Li, Yanhong; Zhang, Chong; Ding, Liang; Jin, Hua; Peng, Jun; Xu, Lin; Liu, Weifeng
Format: Article
Language: English
Subjects: Anesthesia; Anesthesiology; Benchmarks; Filtration; Large language models; Quality assessment; Training
Online access: Full text
EISSN: 2331-8422