MoNDE: Mixture of Near-Data Experts for Large-Scale Sparse Models
Saved in:

| Field | Value |
|---|---|
| Main Authors | |
| Format | Article |
| Language | eng |
| Subjects | |
| Online Access | Order full text |
| Abstract | Mixture-of-Experts (MoE) large language models (LLMs) have memory requirements that often exceed GPU memory capacity, requiring costly parameter movement from secondary memories to the GPU for expert computation. In this work, we present Mixture of Near-Data Experts (MoNDE), a near-data computing solution that efficiently enables MoE LLM inference. MoNDE reduces the volume of MoE parameter movement by transferring only the $\textit{hot}$ experts to the GPU, while computing the remaining $\textit{cold}$ experts inside the host memory device. By replacing the transfers of massive expert parameters with those of small activations, MoNDE enables far more communication-efficient MoE inference, resulting in substantial speedups over existing parameter offloading frameworks for both encoder and decoder operations. |
| DOI | 10.48550/arxiv.2405.18832 |
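The abstract describes a hot/cold split: the few heavily used ("hot") experts are copied to the GPU, while the remaining ("cold") experts are computed inside the host memory device, so only small activations cross the host-GPU link. The sketch below is a minimal, illustrative back-of-the-envelope model of that split, not the authors' implementation: the top-1 routing rule, fp16 parameters, the FFN dimensions, the simplified offloading baseline, and the function name `dispatch_and_count_traffic` are all assumptions made here for illustration.

```python
import numpy as np

def dispatch_and_count_traffic(router_logits, d_model, d_ff, hot_k, dtype_bytes=2):
    """Pick 'hot' experts and estimate host<->GPU traffic for one MoE layer.

    router_logits: (n_tokens, n_experts) gating scores (illustrative)
    d_model, d_ff: expert FFN dimensions; each expert ~ 2 * d_model * d_ff weights
    hot_k:         number of most-loaded experts whose weights are shipped to the GPU
    dtype_bytes:   2 assumes fp16 parameters and activations
    Returns (hot_expert_ids, bytes_moved_hot_cold_split, bytes_moved_offload_all).
    """
    n_tokens, n_experts = router_logits.shape
    assignment = router_logits.argmax(axis=1)              # top-1 routing per token
    load = np.bincount(assignment, minlength=n_experts)    # tokens per expert

    hot = np.argsort(load)[::-1][:hot_k]                   # most popular experts
    hot_set = set(hot.tolist())

    expert_param_bytes = 2 * d_model * d_ff * dtype_bytes  # up- and down-projection

    split_bytes = 0
    for e in range(n_experts):
        if load[e] == 0:
            continue
        if e in hot_set:
            # Hot expert: its parameters are copied to the GPU once; the many
            # tokens routed to it are already resident on the GPU.
            split_bytes += expert_param_bytes
        else:
            # Cold expert: parameters stay in the host memory device; only the
            # few token activations travel there and the results travel back.
            split_bytes += 2 * load[e] * d_model * dtype_bytes

    # Simplified baseline: parameter offloading copies every active expert's
    # weights to the GPU for this layer.
    offload_bytes = sum(expert_param_bytes for e in range(n_experts) if load[e] > 0)
    return hot, split_bytes, offload_bytes


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = rng.standard_normal((64, 32))                 # 64 decode tokens, 32 experts
    hot, split_b, offload_b = dispatch_and_count_traffic(
        logits, d_model=4096, d_ff=14336, hot_k=2)
    print(f"hot experts: {hot}, "
          f"hot/cold split traffic ~{split_b / 1e6:.1f} MB "
          f"vs offload-all ~{offload_b / 1e6:.1f} MB")
```

With these assumed sizes, each expert's weights are on the order of hundreds of megabytes, while the activations of the handful of tokens routed to a cold expert amount to only a few kilobytes, which is the communication asymmetry the abstract's speedup claim rests on.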