Beyond Distillation: Task-level Mixture-of-Experts for Efficient Inference

Sparse Mixture-of-Experts (MoE) has been a successful approach for scaling multilingual translation models to billions of parameters without a proportional increase in training computation. However, MoE models are prohibitively large and practitioners often resort to methods such as distillation for...
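The abstract contrasts sparse per-token routing, which keeps training compute roughly constant as experts are added, with the serving cost of keeping every expert resident. As a rough illustration only (a minimal NumPy sketch under assumed shapes, with hypothetical names such as `token_level_moe`, `task_level_moe`, and `task_to_expert`; not the authors' implementation), the snippet below shows why per-token gating needs all experts available at inference, whereas routing by task lets a small per-task subnetwork be extracted for serving.

```python
# Minimal NumPy sketch of sparse-MoE routing (illustrative only; not the
# paper's implementation). Names and shapes are assumptions for exposition.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, num_experts, top_k = 16, 32, 4, 1

# Each expert is a small feed-forward block; parameter count grows with
# num_experts, but per-token compute depends only on top_k.
experts = [
    (rng.normal(size=(d_model, d_ff)), rng.normal(size=(d_ff, d_model)))
    for _ in range(num_experts)
]
token_gate = rng.normal(size=(d_model, num_experts))   # per-token router
task_to_expert = {"en_to_fr": 0, "en_to_de": 2}        # assumed per-task routing table


def expert_ffn(x, expert_id):
    w_in, w_out = experts[expert_id]
    return np.maximum(x @ w_in, 0.0) @ w_out


def token_level_moe(tokens):
    """Standard sparse MoE: each token picks its own top-k expert(s),
    so serving needs every expert resident in memory."""
    logits = tokens @ token_gate
    chosen = np.argsort(logits, axis=-1)[:, -top_k:]
    out = np.zeros_like(tokens)
    for i, row in enumerate(chosen):
        for e in row:
            out[i] += expert_ffn(tokens[i], e)
    return out


def task_level_moe(tokens, task):
    """Task-level routing: one expert per task, so a per-task sub-model
    can be extracted and served without the unused experts."""
    return expert_ffn(tokens, task_to_expert[task])


tokens = rng.normal(size=(8, d_model))            # a batch of 8 token vectors
print(token_level_moe(tokens).shape)              # (8, 16)
print(task_level_moe(tokens, "en_to_fr").shape)   # (8, 16)
```

Under this reading, the task-level variant only ever touches the experts listed for the tasks being served, which is the efficiency argument the title alludes to.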

Bibliographic Details
Main Authors: Kudugunta, Sneha; Huang, Yanping; Bapna, Ankur; Krikun, Maxim; Lepikhin, Dmitry; Luong, Minh-Thang; Firat, Orhan
Format: Article
Language: English