Flextron: Many-in-One Flexible Large Language Model
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Request full text
Abstract: Training modern LLMs is extremely resource intensive, and customizing them for various deployment scenarios characterized by limited compute and memory resources through repeated training is impractical. In this paper, we introduce Flextron, a network architecture and post-training model optimization framework supporting flexible model deployment. The Flextron architecture utilizes a nested elastic structure to rapidly adapt to specific user-defined latency and accuracy targets during inference with no additional fine-tuning required. It is also input-adaptive, and can automatically route tokens through its sub-networks for improved performance and efficiency. We present a sample-efficient training method and associated routing algorithms for systematically transforming an existing trained LLM into a Flextron model. We evaluate Flextron on the GPT-3 and Llama-2 families of LLMs, and demonstrate superior performance over multiple end-to-end trained variants and other state-of-the-art elastic networks, all with a single pretraining run that consumes a mere 7.63% of the tokens used in the original pretraining.
DOI: 10.48550/arxiv.2406.10260
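
The abstract describes a nested elastic structure whose sub-networks are selected per token by an input-adaptive router. As a rough illustration of that general idea only (this is not the paper's implementation; the class names, candidate widths, and hard argmax routing below are all hypothetical), here is a minimal PyTorch sketch of a feed-forward layer whose hidden width can be sliced at inference time, together with a toy per-token width router:

```python
# Minimal, illustrative sketch (not the Flextron implementation): a toy
# "nested elastic" feed-forward layer plus a per-token width router in PyTorch.
# Class names, candidate widths, and the hard-routing scheme are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ElasticMLP(nn.Module):
    """Feed-forward block whose hidden width can be shrunk at inference time
    by keeping only the first `width` hidden units (nested sub-networks)."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.up = nn.Linear(d_model, d_hidden)
        self.down = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor, width: int) -> torch.Tensor:
        # Slice the shared weights: the first `width` hidden units form a
        # smaller sub-network nested inside the full layer, so no extra
        # weights need to be stored for each deployment target.
        h = F.gelu(F.linear(x, self.up.weight[:width], self.up.bias[:width]))
        return F.linear(h, self.down.weight[:, :width], self.down.bias)


class WidthRouter(nn.Module):
    """Toy input-adaptive router: scores a few candidate widths per token."""

    def __init__(self, d_model: int, widths: list[int]):
        super().__init__()
        self.widths = widths
        self.score = nn.Linear(d_model, len(widths))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Hard argmax routing for illustration only; training such a router
        # in practice needs a differentiable or auxiliary-loss-based scheme.
        return self.score(x).argmax(dim=-1)


if __name__ == "__main__":
    d_model, d_hidden, widths = 64, 256, [64, 128, 256]
    mlp, router = ElasticMLP(d_model, d_hidden), WidthRouter(d_model, widths)

    tokens = torch.randn(8, d_model)    # 8 tokens
    choice = router(tokens)             # per-token width index, shape (8,)
    out = torch.stack(
        [mlp(tok, widths[idx]) for tok, idx in zip(tokens, choice.tolist())]
    )
    print(out.shape)                    # torch.Size([8, 64])
```

In this sketch, smaller widths reuse a prefix of the full layer's weights, which is one simple way to obtain nested sub-networks of varying cost from a single set of parameters; the actual architecture, training method, and routing algorithms are described in the paper itself.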