SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
Main authors: | , , , , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Recently, there has been growing interest in using Large Language
Models (LLMs) for scientific research, and numerous benchmarks have been
proposed to evaluate this ability. However, current benchmarks are mostly
based on pre-collected objective questions. This design suffers from the data
leakage problem and lacks any evaluation of subjective Q/A ability. In this
paper, we propose SciEval, a comprehensive and multi-disciplinary evaluation
benchmark to address these issues. Based on Bloom's taxonomy, SciEval covers
four dimensions to systematically evaluate scientific research ability. In
particular, we design a "dynamic" subset, generated from scientific
principles, to protect the evaluation from potential data leakage. Both
objective and subjective questions are included in SciEval. These
characteristics make SciEval a more effective benchmark for evaluating the
scientific research ability of LLMs. Comprehensive experiments on the most
advanced LLMs show that, although GPT-4 achieves SOTA performance compared to
other LLMs, there is still substantial room for improvement, especially on
the dynamic questions. The code and data are publicly available at
https://github.com/OpenDFM/SciEval. |
DOI: | 10.48550/arxiv.2308.13149 |
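The "dynamic" subset mentioned in the abstract generates questions from scientific principles so that correct answers cannot be memorized from a fixed, pre-collected question bank. The record does not describe how SciEval actually constructs these items; the sketch below is only a hypothetical illustration of the idea, using an invented ideal-gas-law template and an invented `query_llm` placeholder rather than the benchmark's real code (see https://github.com/OpenDFM/SciEval for the actual implementation).

```python
import random

def make_dynamic_gas_law_question(seed=None):
    """Generate a fresh ideal-gas-law question with randomized values,
    so the gold answer cannot leak from any pre-collected dataset.
    (Illustrative template only, not SciEval's generation procedure.)"""
    rng = random.Random(seed)
    n = round(rng.uniform(0.5, 3.0), 2)   # moles of gas
    T = rng.randint(273, 373)             # temperature in kelvin
    V = round(rng.uniform(10.0, 40.0), 1) # volume in liters
    R = 0.08206                           # gas constant, L*atm/(mol*K)
    answer = n * R * T / V                # pressure in atm, from PV = nRT
    question = (
        f"A sample of {n} mol of an ideal gas is held at {T} K "
        f"in a {V} L container. What is its pressure in atm?"
    )
    return question, round(answer, 2)

def score(model_answer: float, gold: float, rel_tol: float = 0.02) -> bool:
    """Mark an answer correct if it is within 2% of the computed value."""
    return abs(model_answer - gold) <= rel_tol * abs(gold)

if __name__ == "__main__":
    q, gold = make_dynamic_gas_law_question(seed=42)
    print(q)
    print("gold answer (atm):", gold)
    # model_answer = query_llm(q)  # hypothetical call to the LLM under test
    # print("correct:", score(model_answer, gold))
```

Because the gold answer is computed from the randomized parameters at generation time, every evaluation run can use previously unseen items, which is the leakage-avoidance property the abstract attributes to the dynamic subset.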