Expert-Token Resonance: Redefining MoE Routing through Affinity-Driven Active Selection

Full Description

Bibliographic Details
Published in: arXiv.org 2024-08
Main Authors: Li, Jing, Sun, Zhijie, Lin, Dachao, He, Xuan, Lin, Yi, Zheng, Binfan, Zeng, Li, Zhao, Rongqian, Chen, Xin
Format: Article
Language: English
Online Access: Full text
Description
Summary: Mixture-of-Experts (MoE) architectures have emerged as a paradigm-shifting approach for large language models (LLMs), offering unprecedented computational efficiency. However, these architectures grapple with challenges of token distribution imbalance and expert homogenization, impeding optimal semantic generalization. We introduce a novel framework that redefines MoE routing through affinity-driven active selection. The framework's innovations encompass: (1) a rigorous formulation of expert-token affinity metrics; (2) an adaptive bidirectional selection mechanism leveraging resonance between experts and tokens; (3) a theoretical derivation of, and experimental evidence for, reduced expert capacity bounds under dynamic token distribution evolution. The framework also integrates an orthogonal feature extraction module and an optimized loss function for expert localization. Our theoretical analysis demonstrates that this approach mitigates expert homogenization while enabling substantial capacity boundary reduction. Experimental validation corroborates these findings: the framework achieves a 40% reduction in tokens processed by each expert without compromising model convergence or efficacy. When coupled with communication optimizations, training efficiency improvements of 5.4% to 46.6% are observed. After supervised fine-tuning, the model exhibits performance gains of 9.7% to 14.1% across the GDAD, C-Eval, and TeleQnA benchmarks.
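To make the bidirectional selection idea concrete, here is a minimal sketch, not the authors' implementation: it assumes cosine similarity as the expert-token affinity metric and represents each expert by a centroid vector; the function name resonant_route and its top_k and capacity parameters are hypothetical. A token-expert pair is dispatched only when the token nominates the expert and the expert accepts the token, which is one way a bidirectional scheme can bound per-expert load.

```python
# Hypothetical sketch of affinity-driven bidirectional ("resonant") routing.
# All names here are illustrative, not the paper's API.
import numpy as np

def resonant_route(tokens, experts, top_k=2, capacity=4):
    """Dispatch tokens to experts only when both sides select each other.

    tokens:  (T, d) token hidden states
    experts: (E, d) expert centroid vectors (one affinity anchor per expert)
    Returns a boolean (T, E) dispatch mask.
    """
    # Cosine affinity between every token and every expert centroid.
    t = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    e = experts / np.linalg.norm(experts, axis=1, keepdims=True)
    affinity = t @ e.T                                  # (T, E)

    # Token side: each token nominates its top_k most-affine experts.
    token_choice = np.zeros_like(affinity, dtype=bool)
    top_experts = np.argsort(-affinity, axis=1)[:, :top_k]
    np.put_along_axis(token_choice, top_experts, True, axis=1)

    # Expert side: each expert accepts at most `capacity` most-affine tokens.
    expert_choice = np.zeros_like(affinity, dtype=bool)
    top_tokens = np.argsort(-affinity, axis=0)[:capacity, :]
    np.put_along_axis(expert_choice, top_tokens, True, axis=0)

    # Resonance: a pair is dispatched only if both sides agree.
    return token_choice & expert_choice

rng = np.random.default_rng(0)
mask = resonant_route(rng.normal(size=(16, 8)), rng.normal(size=(4, 8)))
print(mask.sum(axis=0))  # per-expert token counts, each <= capacity
```

Because each expert accepts at most `capacity` tokens, the per-expert load bound holds even under a skewed token distribution, which mirrors the capacity-bound reduction the abstract describes.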
ISSN:2331-8422