MedChatZH: A tuning LLM for traditional Chinese medicine consultations



Bibliographic Details
Published in: Computers in Biology and Medicine, 2024-04, Vol. 172, p. 108290, Article 108290
Main authors: Tan, Yang; Zhang, Zhixing; Li, Mingchen; Pan, Fei; Duan, Hao; Huang, Zijie; Deng, Hua; Yu, Zhuohang; Yang, Chen; Shen, Guoyang; Qi, Peng; Yue, Chengyuan; Liu, Yuxian; Hong, Liang; Yu, Huiqun; Fan, Guisheng; Tang, Yun
Format: Article
Language: English
Online access: Full text
Description
Abstract: Generative Large Language Models (LLMs) have achieved significant success in various natural language processing tasks, including question answering (QA) and dialogue systems. However, most models are trained on English data and generalize poorly when answering in Chinese. This limitation is especially evident in specialized domains such as traditional Chinese medical QA, where performance suffers from the absence of fine-tuning and high-quality datasets. To address this, we introduce MedChatZH, a dialogue model optimized for Chinese medical QA, built on a transformer decoder with the LLaMA architecture. Continued pre-training on a curated corpus of Chinese medical books is followed by fine-tuning on a carefully selected medical instruction dataset, and the resulting MedChatZH outperforms several Chinese dialogue baselines on a real-world medical dialogue dataset. Our model, code, and dataset are publicly available on GitHub (https://github.com/tyang816/MedChatZH) to encourage further research in traditional Chinese medicine and LLMs.

Highlights:
• Introduce MedChatZH, an AI system for TCM dialogues, showcasing effective consultation performance.
• Develop a TCM corpus for pre-training, with a refined medical dialogue dataset, ensuring quality.
• Demonstrate MedChatZH's superior performance over baselines on a Chinese medical QA benchmark.
• Focus on ethical medical practice in MedChatZH's design, enhancing safety and compliance.
• Open-source MedChatZH's code, weights, and data to foster community collaboration in TCM advancement.
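The two-stage recipe the abstract describes — continued pre-training on domain text, then instruction fine-tuning — is commonly implemented by rendering each QA pair into a fixed prompt template before tokenization. The sketch below illustrates that formatting step only; the Alpaca-style template and the field names (`instruction`, `output`) are generic assumptions for illustration, not MedChatZH's actual data format.

```python
def build_instruction_prompt(example: dict) -> str:
    """Render one medical QA example into a single training string.

    The layout below is a generic Alpaca-style template used purely
    for illustration; the paper's real prompt format may differ.
    """
    question = example["instruction"]  # e.g. a patient's question
    answer = example["output"]         # the reference answer
    return (
        "Below is a medical question. Write a helpful answer.\n\n"
        f"### Question:\n{question}\n\n"
        f"### Answer:\n{answer}"
    )

# Hypothetical example record, not taken from the MedChatZH dataset.
sample = {
    "instruction": "What is ginseng traditionally used for?",
    "output": "In traditional Chinese medicine, ginseng is used as a tonic.",
}
prompt = build_instruction_prompt(sample)
```

During fine-tuning, such strings are tokenized and the model is trained with the usual causal language-modeling loss, often masked so that only the answer tokens contribute.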
ISSN: 0010-4825
EISSN: 1879-0534
DOI: 10.1016/j.compbiomed.2024.108290