Dynamic Language Group-Based MoE: Enhancing Code-Switching Speech Recognition with Hierarchical Routing
Format: Article
Language: English
Abstract: The Mixture of Experts (MoE) approach is well-suited for multilingual and code-switching (CS) tasks due to its multi-expert architecture. This work introduces DLG-MoE, a Dynamic Language Group-based MoE optimized for bilingual and CS scenarios. DLG-MoE operates on a hierarchical routing mechanism. First, a language router explicitly models the language and dispatches the representations to the corresponding language expert groups. Subsequently, the unsupervised router within each language group implicitly models attributes beyond language and coordinates expert routing and collaboration. The model achieves state-of-the-art (SOTA) performance while retaining unparalleled flexibility: it supports different top-k values at inference and streaming recognition, and its parameters can be pruned to obtain a monolingual sub-model. The code will be released.
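To make the hierarchical routing described in the abstract more concrete, the following is a minimal sketch, not the released implementation: a language router first assigns each frame to a language expert group, then an unsupervised router inside that group mixes the top-k experts. All class names, dimensions, and the hard argmax language decision are illustrative assumptions.

```python
# Hypothetical sketch of a hierarchical (language-group) MoE layer.
# Not the authors' code; names and hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Expert(nn.Module):
    """A simple feed-forward expert block."""
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )

    def forward(self, x):
        return self.net(x)


class HierarchicalGroupMoE(nn.Module):
    """Language router -> per-group unsupervised top-k router (illustrative)."""
    def __init__(self, d_model=256, d_ff=1024, num_groups=2,
                 experts_per_group=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Explicit language routing: one logit per language group.
        self.language_router = nn.Linear(d_model, num_groups)
        # One unsupervised router and one set of experts per language group.
        self.group_routers = nn.ModuleList(
            nn.Linear(d_model, experts_per_group) for _ in range(num_groups)
        )
        self.groups = nn.ModuleList(
            nn.ModuleList(Expert(d_model, d_ff) for _ in range(experts_per_group))
            for _ in range(num_groups)
        )

    def forward(self, x):
        # x: (batch, time, d_model) frame-level representations.
        lang_logits = self.language_router(x)        # (B, T, num_groups)
        lang_id = lang_logits.argmax(dim=-1)         # hard language decision per frame
        out = torch.zeros_like(x)
        for g, (router, experts) in enumerate(zip(self.group_routers, self.groups)):
            mask = lang_id == g                      # frames dispatched to group g
            if not mask.any():
                continue
            tokens = x[mask]                         # (N, d_model)
            gate = F.softmax(router(tokens), dim=-1) # unsupervised in-group gating
            weights, idx = gate.topk(self.top_k, dim=-1)
            weights = weights / weights.sum(dim=-1, keepdim=True)
            mixed = torch.zeros_like(tokens)
            for k in range(self.top_k):
                for e, expert in enumerate(experts):
                    sel = idx[:, k] == e             # tokens whose k-th choice is expert e
                    if sel.any():
                        mixed[sel] += weights[sel, k].unsqueeze(-1) * expert(tokens[sel])
            out[mask] = mixed
        # Language logits can be supervised with a language-ID loss,
        # matching the "explicit language modeling" idea in the abstract.
        return out, lang_logits
```

A forward pass such as `HierarchicalGroupMoE()(torch.randn(2, 50, 256))` returns the mixed hidden states and the language logits. Under this sketch, the flexibility mentioned in the abstract would correspond to changing `top_k` at inference time, and pruning to a monolingual sub-model would amount to keeping only one group and its router.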
DOI: 10.48550/arxiv.2407.18581