ClimateGPT: Towards AI Synthesizing Interdisciplinary Research on Climate Change
Saved in:
Main authors:
Format: Article
Language: English
Subject terms:
Online access: Order full text
Abstract: This paper introduces ClimateGPT, a model family of domain-specific large language models that synthesize interdisciplinary research on climate change. We trained two 7B models from scratch on a science-oriented dataset of 300B tokens. For the first model, the 4.2B domain-specific tokens were included during pre-training; the second was adapted to the climate domain after pre-training. Additionally, ClimateGPT-7B, 13B and 70B are continuously pre-trained from Llama 2 on a domain-specific dataset of 4.2B tokens. Each model is instruction fine-tuned on a high-quality, human-generated domain-specific dataset created in close cooperation with climate scientists. To reduce hallucinations, we optimize the model for retrieval augmentation and propose a hierarchical retrieval strategy. To increase the accessibility of our model to non-English speakers, we propose using cascaded machine translation and show that this approach can perform comparably to natively multilingual models while being easier to scale to a large number of languages. Further, to address the intrinsically interdisciplinary nature of climate change, we consider different research perspectives, so the model can produce in-depth answers focusing on individual perspectives in addition to an overall answer. We propose a suite of automatic climate-specific benchmarks to evaluate LLMs. On these benchmarks, ClimateGPT-7B performs on par with the ten-times-larger Llama-2-70B Chat model while not degrading results on general domain benchmarks. Our human evaluation confirms the trends we saw in our benchmarks. All models were trained and evaluated using renewable energy and are released publicly.
DOI: 10.48550/arxiv.2401.09646
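The abstract mentions a hierarchical retrieval strategy for retrieval augmentation without detailing it. The sketch below shows one generic two-stage reading (coarse document selection followed by fine-grained passage ranking and prompt construction); `embed`, `hierarchical_retrieve`, and `build_prompt` are hypothetical helpers for illustration, not the paper's actual method.

```python
# Hedged sketch of a generic two-stage ("hierarchical") retrieval step used for
# retrieval augmentation. `embed` is a hypothetical embedding function; the
# actual ClimateGPT retrieval strategy is described in the paper itself.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: return a unit-norm embedding vector for `text`."""
    raise NotImplementedError

def hierarchical_retrieve(query, documents, top_docs=5, top_passages=3):
    """documents: list of dicts {"summary": str, "passages": [str, ...]}."""
    q = embed(query)
    # Stage 1: rank whole documents by similarity of their summaries to the query.
    doc_scores = [(float(q @ embed(d["summary"])), d) for d in documents]
    candidates = [d for _, d in sorted(doc_scores, key=lambda x: -x[0])[:top_docs]]
    # Stage 2: rank individual passages, but only within the selected documents.
    passages = [p for d in candidates for p in d["passages"]]
    p_scores = [(float(q @ embed(p)), p) for p in passages]
    return [p for _, p in sorted(p_scores, key=lambda x: -x[0])[:top_passages]]

def build_prompt(query, retrieved):
    # Prepend the retrieved evidence so the model can ground its answer in it.
    context = "\n\n".join(retrieved)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```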
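The cascaded machine-translation setup for non-English users is likewise only named in the abstract. A minimal sketch of one plausible reading, assuming hypothetical `translate` and `climate_llm_generate` helpers (not part of the ClimateGPT release), translates the query into English, queries the English-language model, and translates the answer back.

```python
# Hedged sketch of a cascaded machine-translation wrapper around an
# English-only climate LLM. Both helpers below are placeholders, not
# functions shipped with ClimateGPT.

def translate(text: str, src: str, tgt: str) -> str:
    """Placeholder for any MT system (e.g. a seq2seq model or a translation API)."""
    raise NotImplementedError

def climate_llm_generate(prompt_en: str) -> str:
    """Placeholder for querying the English-language ClimateGPT model."""
    raise NotImplementedError

def answer_in_user_language(query: str, user_lang: str) -> str:
    # Cascade: user language -> English -> LLM -> user language.
    query_en = query if user_lang == "en" else translate(query, user_lang, "en")
    answer_en = climate_llm_generate(query_en)
    return answer_en if user_lang == "en" else translate(answer_en, "en", user_lang)
```

Scaling to a new language in this cascade only requires an MT system for that language; the underlying model itself is untouched, which is the scalability argument the abstract makes.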