OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models
Saved in:
| | |
|---|---|
| Main authors: | |
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
Summary: To help the open-source community better understand Mixture-of-Experts (MoE)-based large language models (LLMs), we train and release OpenMoE, a series of fully open-source and reproducible decoder-only MoE LLMs, ranging from 650M to 34B parameters and trained on up to over 1T tokens. Our investigation confirms that MoE-based LLMs can offer a more favorable cost-effectiveness trade-off than dense LLMs, highlighting their potential for future LLM development.

Another important contribution of this study is an in-depth analysis of the routing mechanisms within our OpenMoE models, which yields three significant findings: Context-Independent Specialization, Early Routing Learning, and Drop-towards-the-End. We find that routing decisions in MoE models are predominantly based on token IDs, with minimal context relevance, and that token-to-expert assignments are determined early in the pre-training phase and remain largely unchanged thereafter. This imperfect routing can degrade performance, particularly in sequential tasks such as multi-turn conversations, where tokens appearing later in a sequence are more likely to be dropped.

Finally, we rethink our design in light of these observations and analysis. To facilitate future MoE LLM development, we propose potential strategies for mitigating the issues we found and further improving off-the-shelf MoE LLM designs.
DOI: 10.48550/arxiv.2402.01739
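The Drop-towards-the-End effect described in the summary follows from capacity-limited expert routing: each expert accepts only a fixed number of tokens per batch, and tokens are assigned in sequence order, so once a frequently chosen expert fills up, tokens appearing later are dropped. The sketch below illustrates this with a toy top-1 router in plain Python; the function name, capacity factor, and skewed routing distribution are illustrative assumptions and do not reflect OpenMoE's actual implementation.

```python
# Minimal sketch (hypothetical names) of capacity-limited top-1 MoE routing,
# illustrating why tokens late in a sequence are more likely to be dropped.

import random

def route_with_capacity(token_expert_ids, num_experts, capacity_factor=1.25):
    """Assign tokens (in sequence order) to their chosen experts.

    Each expert accepts at most `capacity` tokens; once an expert is full,
    any later token routed to it is dropped (it skips the expert layer).
    Returns (kept_positions, dropped_positions).
    """
    num_tokens = len(token_expert_ids)
    capacity = int(capacity_factor * num_tokens / num_experts)
    load = [0] * num_experts          # tokens accepted per expert so far
    kept, dropped = [], []
    for pos, expert in enumerate(token_expert_ids):
        if load[expert] < capacity:
            load[expert] += 1
            kept.append(pos)
        else:
            dropped.append(pos)       # expert already full: token is dropped
    return kept, dropped

if __name__ == "__main__":
    random.seed(0)
    num_experts, seq_len = 4, 64
    # Skewed, context-independent routing: expert 0 is chosen far more often.
    choices = random.choices(range(num_experts), weights=[8, 1, 1, 1], k=seq_len)
    kept, dropped = route_with_capacity(choices, num_experts)
    print(f"capacity per expert: {int(1.25 * seq_len / num_experts)}")
    print(f"dropped {len(dropped)} of {seq_len} tokens")
    print("dropped positions (note they cluster toward the end):", dropped)
```

Because assignments are made in order and the overflow condition only triggers after an expert reaches capacity, every dropped position necessarily lies toward the end of the sequence, which is why multi-turn conversations, whose later turns sit at the tail, suffer most.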