Enhancing Code-Switching Speech Recognition with LID-Based Collaborative Mixture of Experts Model
Format: Article
Language: English
Abstract: Due to the inherent difficulty of modeling phonetic similarities across different languages, code-switching speech recognition presents a formidable challenge. This study proposes Collaborative-MoE, a Mixture of Experts (MoE) model that leverages a collaborative mechanism among expert groups. First, a preceding routing network explicitly learns a Language Identification (LID) task and selects experts based on the acquired LID weights. This process provides robust routing information to the MoE layer, mitigating interference from diverse language domains on expert network parameter updates. The LID weights are also employed to facilitate inter-group collaboration, enabling the integration of language-specific representations. Furthermore, within each language expert group, a gating network operates without supervision to foster collaboration on attributes beyond language. Extensive experiments demonstrate the efficacy of the approach, achieving significant performance gains over alternative methods. Importantly, the method preserves the efficient inference characteristic of MoE models without requiring additional pre-training.
DOI: 10.48550/arxiv.2409.02050
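
The abstract describes an MoE layer in which a supervised LID router selects and weights language-specific expert groups, while an unsupervised gate mixes experts inside each group. The following is a minimal PyTorch sketch of one plausible reading of that routing scheme; the module names, tensor shapes, and the two-language/two-expert configuration are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of an LID-guided collaborative MoE layer (assumed design).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FeedForwardExpert(nn.Module):
        """A single feed-forward expert network."""
        def __init__(self, dim, hidden_dim):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, dim)
            )

        def forward(self, x):
            return self.net(x)

    class LIDCollaborativeMoE(nn.Module):
        """MoE layer whose routing network is trained on frame-level LID labels.

        - lid_router predicts per-frame language posteriors (the LID weights).
        - Each language owns a group of experts; an unsupervised gate mixes the
          experts inside a group (collaboration on attributes beyond language).
        - The LID weights then mix the per-group outputs, one plausible reading
          of the inter-group collaboration described in the abstract.
        """
        def __init__(self, dim, hidden_dim, num_languages=2, experts_per_group=2):
            super().__init__()
            self.num_languages = num_languages
            # Supervised with LID labels via the auxiliary loss below.
            self.lid_router = nn.Linear(dim, num_languages)
            self.group_gates = nn.ModuleList(
                nn.Linear(dim, experts_per_group) for _ in range(num_languages)
            )
            self.groups = nn.ModuleList(
                nn.ModuleList(FeedForwardExpert(dim, hidden_dim)
                              for _ in range(experts_per_group))
                for _ in range(num_languages)
            )

        def forward(self, x, lid_labels=None):
            """x: (batch, time, dim); lid_labels: (batch, time) language ids or None."""
            lid_logits = self.lid_router(x)                       # (B, T, L)
            lid_weights = F.softmax(lid_logits, dim=-1)

            group_outputs = []
            for gate, experts in zip(self.group_gates, self.groups):
                gate_weights = F.softmax(gate(x), dim=-1)         # (B, T, E), unsupervised
                expert_outs = torch.stack([e(x) for e in experts], dim=-1)  # (B, T, D, E)
                group_outputs.append(
                    torch.einsum("btde,bte->btd", expert_outs, gate_weights)
                )
            group_outputs = torch.stack(group_outputs, dim=-1)    # (B, T, D, L)

            # Inter-group collaboration: LID weights combine language-specific outputs.
            y = torch.einsum("btdl,btl->btd", group_outputs, lid_weights)

            # Auxiliary LID loss keeps routing aligned with language identity.
            lid_loss = None
            if lid_labels is not None:
                lid_loss = F.cross_entropy(
                    lid_logits.reshape(-1, self.num_languages), lid_labels.reshape(-1)
                )
            return y, lid_loss

Under this reading, the explicit LID supervision is what keeps frames of each language routed to a consistent expert group, so that parameter updates for one language do not interfere with the other; the exact losses and group sizes used in the paper are not specified in the abstract.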