FlexiBit: Fully Flexible Precision Bit-parallel Accelerator Architecture for Arbitrary Mixed Precision AI
Format: Article
Language: English
Online access: Order full text
Abstract: Recent research has shown that large language models (LLMs) can utilize low-precision floating-point (FP) quantization to deliver high efficiency while maintaining original model accuracy. In particular, recent works have shown the effectiveness of non-power-of-two precisions, such as FP6 and FP5, and the diverse sensitivity of LLM layers to low-precision arithmetic, which motivates mixed-precision arithmetic, including non-power-of-two precisions, in LLMs. Although low precision algorithmically reduces computational overhead, such benefits cannot be fully exploited due to hardware constraints that support only a limited set of power-of-two precisions (e.g., FP8, 16, 32, and 64 in the NVIDIA H100 Tensor Core). In addition, the hardware compute units are designed to support standard formats (e.g., E4M3 and E5M2 for FP8). Such practices require re-designing the hardware whenever a new precision or format emerges, which leads to high hardware replacement costs to exploit the benefits of new precisions and formats. Therefore, in this paper, we propose a new accelerator architecture, FlexiBit, which efficiently supports FP and INT arithmetic in arbitrary precisions and formats. Unlike previous bit-serial designs, which also provide flexibility but at the cost of performance due to their bit-wise temporal processing nature, FlexiBit's architecture enables bit-parallel processing of any precision and format without compute-unit underutilization. FlexiBit's new capability to exploit non-power-of-two precisions and formats led to 1.66x and 1.62x higher performance per area on GPT-3 in FP6 targeting a cloud-scale accelerator, compared to a Tensor Core-like architecture and a state-of-the-art bit-parallel flexible-precision accelerator, BitFusion, respectively. Also, the bit-parallel nature of FlexiBit's architecture led to 3.9x higher performance per area compared to a state-of-the-art bit-serial architecture.
DOI: 10.48550/arxiv.2411.18065
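
The abstract above refers to FP arithmetic in arbitrary precisions and formats, i.e., any split of bits into sign, exponent, and mantissa (for example E4M3 and E5M2 for FP8, or non-power-of-two widths such as FP6). As a purely software-side illustration of what such an arbitrary ExMy format means, the minimal sketch below decodes a bit pattern with a configurable exponent/mantissa split using an IEEE-754-style bias and subnormal handling. The E3M2 layout used for the FP6 example is an illustrative assumption, not a format prescribed by the paper, and this function is not part of FlexiBit itself, which is a hardware architecture.

```python
def decode_fp(bits: int, exp_bits: int, man_bits: int) -> float:
    """Decode an unsigned integer holding a (1 + exp_bits + man_bits)-bit
    floating-point pattern (sign, exponent, mantissa) into a Python float.
    Uses an IEEE-754-style bias of 2**(exp_bits - 1) - 1 and supports
    subnormals; Inf/NaN handling is omitted for brevity."""
    total = 1 + exp_bits + man_bits
    sign = (bits >> (total - 1)) & 0x1
    exp = (bits >> man_bits) & ((1 << exp_bits) - 1)
    man = bits & ((1 << man_bits) - 1)
    bias = (1 << (exp_bits - 1)) - 1
    if exp == 0:
        # Subnormal: no implicit leading 1, fixed exponent of (1 - bias)
        value = (man / (1 << man_bits)) * 2.0 ** (1 - bias)
    else:
        # Normal: implicit leading 1
        value = (1 + man / (1 << man_bits)) * 2.0 ** (exp - bias)
    return -value if sign else value

# FP8 E4M3: 0x48 = 0b0_1001_000 -> 2**(9 - 7) = 4.0
print(decode_fp(0x48, exp_bits=4, man_bits=3))       # 4.0
# FP6 with an assumed E3M2 layout: 0b0_100_01 -> 1.25 * 2**(4 - 3) = 2.5
print(decode_fp(0b010001, exp_bits=3, man_bits=2))   # 2.5
```

The point of parameterizing exp_bits and man_bits is that nothing in the arithmetic itself requires a power-of-two total width; the fixed-width datapaths of conventional tensor cores, not the number format, are what restrict hardware to FP8/16/32/64.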