Efficient Training of Large Language Models on Distributed Infrastructures: A Survey
| | |
|---|---|
| Main authors: | |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
| Summary: | Large Language Models (LLMs) like GPT and LLaMA are revolutionizing the AI industry with their sophisticated capabilities. Training these models requires vast GPU clusters and significant computing time, posing major challenges in terms of scalability, efficiency, and reliability. This survey explores recent advancements in training systems for LLMs, including innovations in training infrastructure with AI accelerators, networking, storage, and scheduling. Additionally, the survey covers parallelism strategies, as well as optimizations for computation, communication, and memory in distributed LLM training. It also includes approaches for maintaining system reliability over extended training periods. By examining current innovations and future directions, this survey aims to provide valuable insights towards improving LLM training systems and tackling ongoing challenges. Furthermore, traditional digital circuit-based computing systems face significant constraints in meeting the computational demands of LLMs, highlighting the need for innovative solutions such as optical computing and optical networks. |
| DOI: | 10.48550/arxiv.2407.20018 |
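
As a minimal illustration of one of the parallelism strategies the summary mentions (data parallelism), the sketch below uses PyTorch's DistributedDataParallel to replicate a model across GPUs and all-reduce gradients during the backward pass. This is not taken from the survey itself; the model, batch size, and hyperparameters are hypothetical placeholders.

```python
# Minimal data-parallel training sketch with PyTorch DDP.
# Hypothetical model and hyperparameters; real LLM training would also use
# tensor/pipeline parallelism and the memory/communication optimizations
# the survey discusses.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = f"cuda:{local_rank}"

    # Placeholder model standing in for an LLM.
    model = torch.nn.Linear(1024, 1024).to(device)
    ddp_model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(ddp_model.parameters(), lr=1e-4)

    for step in range(10):
        # Each rank processes its own shard of the global batch; DDP
        # all-reduces gradients across ranks inside backward().
        inputs = torch.randn(8, 1024, device=device)
        loss = ddp_model(inputs).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Such a script would typically be launched with `torchrun --nproc_per_node=<num_gpus> train_ddp.py`, one process per GPU.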