Harnessing the Power of BERT in the Turkish Clinical Domain: Pretraining Approaches for Limited Data Scenarios
Format: Article
Language: English
Abstract: In recent years, major advances in natural language processing (NLP) have been driven by the emergence of large language models (LLMs), which have reshaped research and development across the field. Building on this progress, our study examines how different pretraining methodologies affect the performance of Turkish clinical language models on a multi-label classification task over radiology reports, with a focus on the challenges posed by limited language resources. In addition, we evaluate, for the first time, a simultaneous pretraining approach that incorporates limited clinical task data. We develop four models: TurkRadBERT-task v1, TurkRadBERT-task v2, TurkRadBERT-sim v1, and TurkRadBERT-sim v2. Our findings indicate that the general Turkish BERT model (BERTurk) and TurkRadBERT-task v1, both of which draw on a substantial general-domain corpus, achieve the best overall performance. Although task-adaptive pretraining can capture domain-specific patterns, it is constrained by the limited task-specific corpus and is susceptible to overfitting. Our results further underscore the importance of domain-specific vocabulary during pretraining for improving model performance. Ultimately, we find that combining general-domain knowledge with task-specific fine-tuning is essential for strong performance across categories. This study offers practical insights for developing effective Turkish clinical language models and can guide future research on pretraining techniques for other low-resource languages in the clinical domain.
DOI: 10.48550/arxiv.2305.03788
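
As a concrete illustration of the downstream task the abstract describes, the sketch below fine-tunes a Turkish BERT checkpoint for multi-label classification of radiology reports with the Hugging Face transformers API. It is a minimal sketch, not the paper's actual pipeline: the checkpoint name (dbmdz/bert-base-turkish-cased, the public BERTurk release), the label count, and the example report text are assumptions.

```python
# Minimal sketch of multi-label fine-tuning in the spirit of the paper's
# task. Checkpoint name, label count, and the toy report are assumptions,
# not details taken from the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_LABELS = 6  # hypothetical number of radiology finding categories

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "dbmdz/bert-base-turkish-cased",
    num_labels=NUM_LABELS,
    # problem_type selects BCEWithLogitsLoss, so each label becomes an
    # independent binary decision instead of a softmax over classes.
    problem_type="multi_label_classification",
)

# Toy batch: one report text with a multi-hot target vector
# (floats, as BCEWithLogitsLoss requires).
texts = ["Sağ akciğerde plevral efüzyon izlendi."]  # hypothetical report
labels = torch.tensor([[0.0, 1.0, 0.0, 0.0, 1.0, 0.0]])

inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**inputs, labels=labels)
outputs.loss.backward()  # one fine-tuning gradient step (optimizer omitted)

# At inference time, a sigmoid yields per-label probabilities, and a
# threshold (0.5 here) turns them into multi-label predictions.
predictions = (torch.sigmoid(outputs.logits) > 0.5).int()
```

The multi-hot targets and per-label sigmoid are what distinguish this setup from ordinary single-label classification: a report may carry several finding categories at once, which matches the multi-label evaluation the abstract refers to.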