BianCang: A Traditional Chinese Medicine Large Language Model
Saved in:
Authors: , , , , , , ,
Format: Article
Language: English
Online access: Order full text
Abstract: The rise of large language models (LLMs) has driven significant progress in
medical applications, including traditional Chinese medicine (TCM). However,
current medical LLMs struggle with TCM diagnosis and syndrome differentiation
due to substantial differences between TCM and modern medical theory, and the
scarcity of specialized, high-quality corpora. This paper addresses these
challenges by proposing BianCang, a TCM-specific LLM, using a two-stage
training process that first injects domain-specific knowledge and then aligns
it through targeted stimulation. To enhance diagnostic and differentiation
capabilities, we constructed pre-training corpora, instruction-aligned datasets
based on real hospital records, and the ChP-TCM dataset derived from the
Pharmacopoeia of the People's Republic of China. We compiled extensive TCM and
medical corpora for continuous pre-training and supervised fine-tuning,
building a comprehensive dataset to refine the model's understanding of TCM.
Evaluations across 11 test sets involving 29 models and 4 tasks demonstrate the
effectiveness of BianCang, offering valuable insights for future research.
Code, datasets, and models are available at
https://github.com/QLU-NLP/BianCang.
DOI: 10.48550/arxiv.2411.11027