SciCode: A Research Coding Benchmark Curated by Scientists
Format: | Article |
Language: | eng |
Abstract: | Since language models (LMs) now outperform average humans on many challenging tasks, it has become increasingly difficult to develop challenging, high-quality, and realistic evaluations. We address this issue by examining LMs' capabilities to generate code for solving real scientific research problems. Incorporating input from scientists and AI researchers in 16 diverse natural science sub-fields, including mathematics, physics, chemistry, biology, and materials science, we created a scientist-curated coding benchmark, SciCode. The problems in SciCode naturally factorize into multiple subproblems, each involving knowledge recall, reasoning, and code synthesis. In total, SciCode contains 338 subproblems decomposed from 80 challenging main problems. It offers optional descriptions specifying useful scientific background information and scientist-annotated gold-standard solutions and test cases for evaluation. Claude 3.5 Sonnet, the best-performing model among those tested, can solve only 4.6% of the problems in the most realistic setting. We believe that SciCode both demonstrates contemporary LMs' progress towards becoming helpful scientific assistants and sheds light on the development and evaluation of scientific AI in the future. |
DOI: | 10.48550/arxiv.2407.13168 |
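
The abstract describes each SciCode problem as a set of subproblems that pair a prompt with optional scientific background, a scientist-annotated gold solution, and test cases used for evaluation. The following is a minimal Python sketch of how such a record might be represented and how model-generated code could be checked against the test cases; the class names, fields, and `passes_tests` helper are illustrative assumptions, not the benchmark's actual schema or evaluation harness.

```python
# Hypothetical sketch of a SciCode-style problem record and test check.
# Field names and layout are assumptions for illustration only.
from dataclasses import dataclass, field


@dataclass
class Subproblem:
    prompt: str                       # subproblem statement shown to the model
    background: str = ""              # optional scientific background description
    gold_solution: str = ""           # scientist-annotated reference code
    test_cases: list[str] = field(default_factory=list)  # executable assertions


@dataclass
class Problem:
    title: str
    subproblems: list[Subproblem] = field(default_factory=list)


def passes_tests(generated_code: str, sub: Subproblem) -> bool:
    """Execute the model's code, then run the subproblem's test cases in the same namespace."""
    namespace: dict = {}
    try:
        exec(generated_code, namespace)      # define the requested function(s)
        for case in sub.test_cases:
            exec(case, namespace)            # each case raises AssertionError on failure
    except Exception:
        return False
    return True


if __name__ == "__main__":
    sub = Subproblem(
        prompt="Implement square(x) returning x**2.",
        test_cases=["assert square(3) == 9", "assert square(-2) == 4"],
    )
    print(passes_tests("def square(x):\n    return x ** 2", sub))  # True
```

Under this reading, a main problem counts as solved in the most realistic setting only if the model's code passes the test cases for every one of its subproblems, which is consistent with the low 4.6% solve rate reported in the abstract.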