eDKM: An Efficient and Accurate Train-time Weight Clustering for Large Language Models
Saved in:
Main authors: | , , , , , , , |
Format: | Article |
Language: | eng |
Subject terms: | |
Online access: | Order full text |
Summary: | Since Large Language Models (LLMs) have demonstrated high-quality
performance on many complex language tasks, there is great interest in
bringing these LLMs to mobile devices for faster responses and better privacy
protection. However, the size of LLMs (i.e., billions of parameters) requires
highly effective compression to fit into storage-limited devices. Among many
compression techniques, weight clustering, a form of non-linear quantization,
is one of the leading candidates for LLM compression and is supported by
modern smartphones. Yet its training overhead is prohibitive for LLM
fine-tuning. In particular, Differentiable KMeans Clustering (DKM) has shown a
state-of-the-art trade-off between compression ratio and accuracy regression,
but its large memory complexity makes it nearly impossible to apply to
train-time LLM compression. In this paper, we propose a memory-efficient DKM
implementation, eDKM, powered by novel techniques that reduce the memory
footprint of DKM by orders of magnitude. For a given tensor to be saved on the
CPU for the backward pass of DKM, we compress the tensor by applying
uniquification and sharding after checking that no identical tensor has
already been copied to the CPU. Our experimental results demonstrate that eDKM
can fine-tune and compress a pretrained LLaMA-7B model from 12.6 GB to 2.5 GB
(3 bits/weight) on the Alpaca dataset, reducing the train-time memory
footprint of a decoder layer by 130$\times$, while delivering good accuracy on
broader LLM benchmarks (e.g., 77.7% for PIQA and 66.1% for WinoGrande). |
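The memory-reduction steps the abstract names (a cross-duplication check before copying to CPU, then uniquification into unique values plus small indices, then sharding of the index list) can be sketched roughly as below. This is a toy illustration, not the paper's implementation; the `ActivationStore` class and all names in it are hypothetical.

```python
import hashlib

import numpy as np


class ActivationStore:
    """Toy sketch of eDKM-style tensor saving for the backward pass:
    (1) deduplicate, (2) uniquify, (3) shard. Illustrative only."""

    def __init__(self, num_shards=4):
        self.num_shards = num_shards
        self.storage = {}  # fingerprint -> (unique values, index shards, shape)

    def save(self, tensor):
        # 1) Deduplication: if an identical tensor was already copied over,
        #    store nothing new and just return its fingerprint.
        key = hashlib.sha256(np.ascontiguousarray(tensor).tobytes()).hexdigest()
        if key in self.storage:
            return key
        # 2) Uniquification: keep only the unique values plus an index map;
        #    with <= 256 unique values the indices fit in one byte each.
        values, inverse = np.unique(tensor.ravel(), return_inverse=True)
        indices = inverse.astype(np.uint8) if len(values) <= 256 else inverse
        # 3) Sharding: split the index list into chunks (e.g. to spread the
        #    copy across learners or transfers).
        shards = np.array_split(indices, self.num_shards)
        self.storage[key] = (values, shards, tensor.shape)
        return key

    def load(self, key):
        # Reassemble the shards and expand indices back into the full tensor.
        values, shards, shape = self.storage[key]
        return values[np.concatenate(shards)].reshape(shape)
```

Saving the same tensor twice only costs one compressed copy, and a tensor with few distinct values (the common case once weights are clustered) is stored as a short codebook plus byte-sized indices.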
DOI: | 10.48550/arxiv.2309.00964 |