CamemBERT-bio: Leveraging Continual Pre-training for Cost-Effective Models on French Biomedical Data
Saved in:

Main authors: | , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | Clinical data in hospitals are increasingly accessible for research through
clinical data warehouses. However, these documents are unstructured, and it is
therefore necessary to extract information from medical reports to conduct
clinical studies. Transfer learning with BERT-like models such as CamemBERT has
enabled major advances for French, especially for named entity recognition.
However, these models are trained on general-domain text and are less effective on
biomedical data. Addressing this gap, we introduce CamemBERT-bio, a dedicated
French biomedical model derived from a new public French biomedical dataset.
Through continual pre-training of the original CamemBERT, CamemBERT-bio
achieves an average improvement of 2.54 F1 points across various
biomedical named entity recognition tasks, reinforcing the potential of
continual pre-training as an equally proficient yet less computationally
intensive alternative to training from scratch. Additionally, we highlight the
importance of using a standard evaluation protocol that provides a clear view
of the current state of the art for French biomedical models. |
---|---|
DOI: | 10.48550/arxiv.2306.15550 |