SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
Saved in:
Main author: |  |
---|---|
Format: | Article |
Language: | eng |
Subjects: |  |
Online access: | Order full text |
Abstract: | Large language models (LLMs) show excellent performance but are compute- and memory-intensive. Quantization can reduce memory and accelerate inference. However, existing methods cannot maintain accuracy and hardware efficiency at the same time. We propose SmoothQuant, a training-free, accuracy-preserving, and general-purpose post-training quantization (PTQ) solution to enable 8-bit weight, 8-bit activation (W8A8) quantization for LLMs. Based on the fact that weights are easy to quantize while activations are not, SmoothQuant smooths the activation outliers by offline migrating the quantization difficulty from activations to weights with a mathematically equivalent transformation. SmoothQuant enables an INT8 quantization of both weights and activations for all the matrix multiplications in LLMs, including OPT, BLOOM, GLM, MT-NLG, Llama-1/2, Falcon, Mistral, and Mixtral models. We demonstrate up to 1.56x speedup and 2x memory reduction for LLMs with negligible loss in accuracy. SmoothQuant enables serving a 530B LLM within a single node. Our work offers a turn-key solution that reduces hardware costs and democratizes LLMs. Code is available at https://github.com/mit-han-lab/smoothquant. |
DOI: | 10.48550/arxiv.2211.10438 |
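
The abstract above describes the core idea: a mathematically equivalent per-channel rescaling that migrates quantization difficulty from the activations (which contain outliers) into the weights, so that both operands of every matrix multiplication can be quantized to INT8. Below is a minimal, self-contained PyTorch sketch of that idea. The per-channel scale formula s_j = max|X_j|^alpha / max|W_j|^(1-alpha), the alpha = 0.5 default, and the toy symmetric per-tensor INT8 quantizer are assumptions made for illustration, not a verbatim rendering of the released code at the repository linked above.

```python
# Sketch of the smoothing idea from the abstract: migrate quantization
# difficulty from activations to weights via a mathematically equivalent
# per-channel rescaling, then quantize both sides of the matmul to INT8.
# The scale formula, alpha=0.5, and quantize_int8 helper are illustrative
# assumptions, not the paper's exact implementation.

import torch


def smooth_scales(act_max: torch.Tensor, weight: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Per-input-channel scales s_j = max|X_j|^alpha / max|W_j|^(1-alpha)."""
    w_max = weight.abs().amax(dim=1)  # [c_in]: per-input-channel max over output channels
    return act_max.clamp(min=1e-5) ** alpha / w_max.clamp(min=1e-5) ** (1 - alpha)


def quantize_int8(t: torch.Tensor) -> tuple[torch.Tensor, float]:
    """Toy symmetric per-tensor INT8 quantization; returns quantized values and scale."""
    scale = t.abs().max() / 127.0
    q = (t / scale).round().clamp(-127, 127)
    return q, scale.item()


# Toy linear layer Y = X @ W with one outlier activation channel.
torch.manual_seed(0)
X = torch.randn(16, 8)
X[:, 3] *= 50.0          # simulate an activation outlier channel
W = torch.randn(8, 4)

# Offline: collect per-channel activation maxima (here simply from X).
act_max = X.abs().amax(dim=0)          # [c_in]
s = smooth_scales(act_max, W, alpha=0.5)

# Mathematically equivalent transformation: X' = X / s, W' = diag(s) @ W.
X_s = X / s              # smoothed activations (outliers divided down)
W_s = W * s[:, None]     # scales folded into the weights

# Same output before quantization, up to float rounding.
assert torch.allclose(X @ W, X_s @ W_s, rtol=1e-4, atol=1e-4)

# Quantize both operands of the smoothed matmul to INT8 and dequantize.
qX, sx = quantize_int8(X_s)
qW, sw = quantize_int8(W_s)
Y_smooth = (qX @ qW) * (sx * sw)

# Baseline: naive W8A8 without smoothing, for comparison.
qX0, sx0 = quantize_int8(X)
qW0, sw0 = quantize_int8(W)
Y_naive = (qX0 @ qW0) * (sx0 * sw0)

print("max abs error, naive W8A8: ", (X @ W - Y_naive).abs().max().item())
print("max abs error, smoothed W8A8:", (X @ W - Y_smooth).abs().max().item())
```

Because X/s multiplied by diag(s)·W equals X·W exactly, the transformation itself costs nothing at inference time once the scales have been applied offline; only the INT8 rounding behaves differently, and it is much better conditioned after the outlier channels have been divided down.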