MoE-CAP: Cost-Accuracy-Performance Benchmarking for Mixture-of-Experts Systems

The sparse Mixture-of-Experts (MoE) architecture is increasingly favored for scaling Large Language Models (LLMs) efficiently; however, MoE systems rely on heterogeneous compute and memory resources. These factors collectively influence the system's Cost, Accuracy, and Performance (CAP), creati...


Bibliographic Details
Main Authors: Fu, Yao; Jiang, Yinsicheng; Huang, Yeqi; Nie, Ping; Lu, Zhan; Xue, Leyang; He, Congjie; Sit, Man-Kit; Xue, Jilong; Dong, Li; Miao, Ziming; Zou, Kai; Ponti, Edoardo; Mai, Luo
Format: Article
Language: English
Online Access: Order full text