AstroMLab 1: Who Wins Astronomy Jeopardy!?
Saved in:
Main authors: | , , , , , , , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Tags: |
|
Abstract: | We present a comprehensive evaluation of proprietary and open-weights large
language models using the first astronomy-specific benchmarking dataset. This
dataset comprises 4,425 multiple-choice questions curated from the Annual
Review of Astronomy and Astrophysics, covering a broad range of astrophysical
topics. Our analysis examines model performance across various astronomical
subfields and assesses response calibration, crucial for potential deployment
in research environments. Claude-3.5-Sonnet outperforms competitors by up to
4.6 percentage points, achieving 85.0% accuracy. For proprietary models, we
observe a consistent reduction in cost every 3 to 12 months to achieve a
similar score on this astronomy benchmark. Open-weights models have rapidly
improved, with LLaMA-3-70b (80.6%) and Qwen-2-72b (77.7%) now competing with
some of the best proprietary models. We identify performance variations across
topics, with non-English-focused models generally struggling more with
questions on exoplanets, stellar astrophysics, and instrumentation.
These challenges likely stem from less abundant training data,
limited historical context, and rapid recent developments in these areas. This
pattern is observed across both open-weights and proprietary models, with
regional dependencies evident, highlighting the impact of training data
diversity on model performance in specialized scientific domains.
Top-performing models demonstrate well-calibrated confidence, with correlations
above 0.9 between confidence and correctness, though they tend to be slightly
underconfident. The development of fast, low-cost inference for open-weights
models presents new opportunities for affordable deployment in astronomy. The
rapid progress observed suggests that LLM-driven research in astronomy may
become feasible in the near future. |
---|---|
DOI: | 10.48550/arxiv.2407.11194 |
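The calibration measure summarized in the abstract (correlation between a model's stated confidence and its correctness on multiple-choice questions) can be illustrated with a minimal sketch. The confidence values and correctness labels below are invented for demonstration only; they are not data from the paper.

```python
import numpy as np

# Hypothetical per-question data: the model's self-reported confidence
# and whether its multiple-choice answer was correct (1) or not (0).
confidence = np.array([0.95, 0.90, 0.85, 0.80, 0.70, 0.60, 0.55, 0.50])
correct = np.array([1, 1, 1, 1, 1, 0, 0, 0], dtype=float)

# Pearson correlation between confidence and correctness; a
# well-calibrated model yields a value close to 1.
r = np.corrcoef(confidence, correct)[0, 1]
```

With this toy data the correlation is strongly positive; the paper reports values above 0.9 for its top-performing models.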