Towards Efficient Large Language Models for Scientific Text: A Review
Saved in:
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Large language models (LLMs) have ushered in a new era for processing complex information in various fields, including science. The increasing amount of scientific literature allows these models to acquire and understand scientific knowledge effectively, thus improving their performance in a wide range of tasks. This power, however, comes at a cost: LLMs require extremely expensive computational resources, vast amounts of data, and long training times. In recent years, researchers have therefore proposed various methodologies to make scientific LLMs more affordable. The best-known approaches fall into two directions: reducing the size of the models or enhancing the quality of the data. To date, a comprehensive review of these two families of methods has not been undertaken. In this paper, we (I) summarize the current advances in turning the emerging abilities of LLMs into more accessible AI solutions for science, and (II) investigate the challenges and opportunities of developing affordable solutions for scientific domains using LLMs.
DOI: 10.48550/arxiv.2408.10729