52B to 1T: Lessons Learned via Tele-FLM Series
Format: Article
Language: English
Online access: Order full text
Abstract: Large Language Models (LLMs) represent a significant stride toward Artificial General Intelligence. As scaling laws underscore the potential of increasing model sizes, the academic community has intensified its investigation of LLMs with capacities exceeding 50 billion parameters. This technical report builds on our prior work with Tele-FLM (also known as FLM-2), a publicly available 52-billion-parameter model. We delve into two primary areas: first, we discuss our observations from Supervised Fine-tuning (SFT) of Tele-FLM-52B, which support the "less is more" approach to SFT data construction; second, we present our experiments and analyses on best practices for progressively growing a model from 52 billion to 102 billion, and subsequently to 1 trillion, parameters. We will open-source a 1T model checkpoint, namely Tele-FLM-1T, to advance further training and research.
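The abstract only names progressive model growth; the specific procedure used for Tele-FLM is described in the report itself. As a generic, non-authoritative illustration of what function-preserving growth can look like, the sketch below widens the hidden layer of a toy two-layer MLP in the Net2Net style without changing its input-output behavior. This is an assumed toy setup, not the Tele-FLM method, and all names (widen_mlp, W1, W2) are hypothetical.

```python
# Illustrative sketch only: a Net2Net-style width expansion of a toy MLP,
# not the growth procedure used for Tele-FLM.
import numpy as np

def widen_mlp(W1, W2, new_hidden, rng):
    """Grow the hidden dimension of y = W2 @ relu(W1 @ x) from h to new_hidden
    while keeping the input-output mapping unchanged."""
    h = W1.shape[0]
    assert new_hidden >= h
    # Keep the original hidden units, then replicate randomly chosen ones.
    mapping = np.concatenate([np.arange(h), rng.integers(0, h, new_hidden - h)])
    counts = np.bincount(mapping, minlength=h).astype(W2.dtype)
    W1_new = W1[mapping, :]                    # copy rows of the first layer
    W2_new = W2[:, mapping] / counts[mapping]  # rescale columns of the second
    return W1_new, W2_new

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, h, o = 8, 16, 4
    W1 = rng.standard_normal((h, d))
    W2 = rng.standard_normal((o, h))
    x = rng.standard_normal(d)
    relu = lambda z: np.maximum(z, 0.0)
    y_old = W2 @ relu(W1 @ x)
    W1_big, W2_big = widen_mlp(W1, W2, new_hidden=40, rng=rng)
    y_new = W2_big @ relu(W1_big @ x)
    print(np.allclose(y_old, y_new))  # True: the widened net matches the original
```

Running the script prints True, confirming that the widened network reproduces the original outputs before any further training; growth operators of this general kind allow training to resume from a smaller checkpoint rather than starting from scratch.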
DOI: 10.48550/arxiv.2407.02783