DrBenchmark: A Large Language Understanding Evaluation Benchmark for French Biomedical Domain
| Field | Value |
|---|---|
| Main authors | |
| Format | Article |
| Language | English |
| Subject terms | |
| Online access | Order full text |
Abstract:

In: Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024).

The biomedical domain has sparked significant interest in the field of Natural Language Processing (NLP), which has seen substantial advancements with pre-trained language models (PLMs). However, comparing these models has proven challenging due to variations in evaluation protocols across models. A fair solution is to aggregate diverse downstream tasks into a benchmark, allowing the intrinsic qualities of PLMs to be assessed from multiple perspectives. Although still limited to a few languages, this initiative has been undertaken in the biomedical field, notably for English and Chinese. This limitation hampers the evaluation of the latest French biomedical models, as they are either assessed on a minimal number of tasks with non-standardized protocols or evaluated on general-domain downstream tasks. To bridge this research gap and account for the specificities of French, we present DrBenchmark, the first publicly available French biomedical language understanding benchmark. It encompasses 20 diversified tasks, including named-entity recognition, part-of-speech tagging, question answering, semantic textual similarity, and classification. We evaluate 8 state-of-the-art masked language models (MLMs) pre-trained on general and biomedical-specific data, as well as English-specific MLMs, to assess their cross-lingual capabilities. Our experiments reveal that no single model excels across all tasks, while generalist models sometimes remain competitive.
DOI: 10.48550/arxiv.2402.13432
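
As a minimal illustration of the kind of pre-trained French masked language model the abstract describes evaluating, the sketch below queries one with the Hugging Face `transformers` library. The model name `camembert-base` is an assumption standing in for a generalist French MLM; the paper's actual model list and evaluation protocol are defined by DrBenchmark itself, which assesses models on its 20 downstream tasks rather than by mask probing.

```python
# Minimal sketch (not the DrBenchmark protocol): probing a French masked
# language model via the Hugging Face `transformers` fill-mask pipeline.
# `camembert-base` is an assumed generalist French MLM used for illustration.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="camembert-base")

# A French biomedical-style sentence containing the model's mask token.
sentence = "Le patient souffre d'une <mask> chronique."

# Print the model's top 3 candidate fillers with their probabilities.
for prediction in fill_mask(sentence, top_k=3):
    print(f"{prediction['token_str']}\t{prediction['score']:.3f}")
```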