GL-CLeF: A Global-Local Contrastive Learning Framework for Cross-lingual Spoken Language Understanding
Format: Article
Language: English
Abstract: Due to the high data demands of current methods, attention to zero-shot cross-lingual spoken language understanding (SLU) has grown, as such approaches greatly reduce human annotation effort. However, existing models rely solely on shared parameters, which can only perform implicit alignment across languages. We present the Global-Local Contrastive Learning Framework (GL-CLeF) to address this shortcoming. Specifically, we employ contrastive learning, leveraging bilingual dictionaries to construct multilingual views of the same utterance, then encourage their representations to be more similar to each other than to negative example pairs, which explicitly aligns representations of similar sentences across languages. In addition, a key step in GL-CLeF is its proposed Local and Global components, which achieve fine-grained cross-lingual transfer (i.e., sentence-level Local intent transfer, token-level Local slot transfer, and semantic-level Global transfer across intent and slot). Experiments on MultiATIS++ show that GL-CLeF achieves the best performance and successfully pulls representations of similar sentences across languages closer.
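To make the contrastive recipe concrete, below is a minimal PyTorch sketch of the two ingredients the abstract names: building a multilingual view of an utterance with a bilingual dictionary, and an InfoNCE-style loss that pulls the original and code-switched representations together against negatives. The names `BILINGUAL_DICT`, `code_switch`, and `info_nce`, as well as the 50% replacement ratio, are illustrative assumptions and not the paper's released code; GL-CLeF applies this kind of loss at three granularities (sentence-level for intent, token-level for slots, and a semantic-level Global term across both).

```python
import random
import torch
import torch.nn.functional as F

# Hypothetical bilingual dictionary: English token -> translations.
# A real setup would load full dictionaries (e.g., MUSE) per language pair.
BILINGUAL_DICT = {
    "flights": {"de": "Flüge", "es": "vuelos"},
    "morning": {"de": "Morgen", "es": "mañana"},
}

def code_switch(tokens, dictionary, ratio=0.5):
    """Construct a multilingual view of an utterance by replacing a random
    subset of tokens with dictionary translations. The sampling details
    (per-token ratio, random target language) are assumptions here."""
    switched = []
    for tok in tokens:
        entry = dictionary.get(tok.lower())
        if entry and random.random() < ratio:
            lang = random.choice(list(entry))  # pick a target language
            switched.append(entry[lang])
        else:
            switched.append(tok)
    return switched

def info_nce(anchor, positive, negatives, temperature=0.07):
    """InfoNCE-style contrastive loss: make the anchor (original utterance
    representation) more similar to its multilingual positive view than to
    each negative representation."""
    anchor = F.normalize(anchor, dim=-1)        # (d,)
    positive = F.normalize(positive, dim=-1)    # (d,)
    negatives = F.normalize(negatives, dim=-1)  # (n, d)
    pos_sim = anchor @ positive / temperature             # scalar
    neg_sim = negatives @ anchor / temperature            # (n,)
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim])   # (1 + n,)
    # Cross-entropy with the positive pair as the correct "class".
    return -F.log_softmax(logits, dim=0)[0]

# Usage sketch: sentence-level (Local intent) term with random vectors
# standing in for encoder outputs (e.g., a [CLS]-style sentence vector).
tokens = "list all morning flights from boston".split()
view = code_switch(tokens, BILINGUAL_DICT)
loss = info_nce(torch.randn(768), torch.randn(768), torch.randn(8, 768))
```

The token-level (Local slot) term would apply the same loss per token position, and the Global term would contrast intent and slot representations of the same utterance; those extensions follow the same pattern as the sentence-level case shown here.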
DOI: 10.48550/arxiv.2204.08325