GL-CLeF: A Global-Local Contrastive Learning Framework for Cross-lingual Spoken Language Understanding

Due to the high data demands of current methods, attention to zero-shot cross-lingual spoken language understanding (SLU) has grown, as such approaches greatly reduce human annotation effort. However, existing models rely solely on shared parameters, which can only perform implicit alignment across languages. We present the Global-Local Contrastive Learning Framework (GL-CLeF) to address this shortcoming. Specifically, we employ contrastive learning, leveraging bilingual dictionaries to construct multilingual views of the same utterance, then encourage their representations to be more similar than negative example pairs, which explicitly aligns representations of similar sentences across languages. In addition, a key step in GL-CLeF is the proposed Local and Global components, which achieve fine-grained cross-lingual transfer (i.e., sentence-level Local intent transfer, token-level Local slot transfer, and semantic-level Global transfer across intent and slot). Experiments on MultiATIS++ show that GL-CLeF achieves the best performance and successfully pulls representations of similar sentences across languages closer.

Detailed description

Saved in:
Bibliographic details
Main authors: Qin, Libo; Chen, Qiguang; Xie, Tianbao; Li, Qixin; Lou, Jian-Guang; Che, Wanxiang; Kan, Min-Yen
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Qin, Libo; Chen, Qiguang; Xie, Tianbao; Li, Qixin; Lou, Jian-Guang; Che, Wanxiang; Kan, Min-Yen
description Due to the high data demands of current methods, attention to zero-shot cross-lingual spoken language understanding (SLU) has grown, as such approaches greatly reduce human annotation effort. However, existing models rely solely on shared parameters, which can only perform implicit alignment across languages. We present the Global-Local Contrastive Learning Framework (GL-CLeF) to address this shortcoming. Specifically, we employ contrastive learning, leveraging bilingual dictionaries to construct multilingual views of the same utterance, then encourage their representations to be more similar than negative example pairs, which explicitly aligns representations of similar sentences across languages. In addition, a key step in GL-CLeF is the proposed Local and Global components, which achieve fine-grained cross-lingual transfer (i.e., sentence-level Local intent transfer, token-level Local slot transfer, and semantic-level Global transfer across intent and slot). Experiments on MultiATIS++ show that GL-CLeF achieves the best performance and successfully pulls representations of similar sentences across languages closer.
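The description above can be illustrated with a minimal sketch: a bilingual dictionary turns an utterance into a code-switched "multilingual view" (the positive), and an InfoNCE-style contrastive loss pushes the anchor representation closer to that view than to other utterances (the negatives). This is an assumption-laden illustration of the general technique, not the authors' implementation; the function names, the toy dictionary, and the use of cosine similarity over fixed vectors are all hypothetical simplifications.

```python
import numpy as np

def code_switch(tokens, bilingual_dict, rng):
    """Build a multilingual view of an utterance by replacing each token
    with a random dictionary translation when one is available
    (hypothetical sketch of bilingual-dictionary view construction)."""
    return [rng.choice(bilingual_dict[t]) if t in bilingual_dict else t
            for t in tokens]

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: make the anchor representation more similar to
    its multilingual view (positive) than to other utterances (negatives)."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    # Similarity of the positive pair first, then each negative pair.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                    # cross-entropy, positive at index 0
```

In a full model the anchor and positive would be encoder outputs (e.g. from a multilingual pretrained encoder) for the original and code-switched utterances; the loss is small when the positive pair is already closer than every negative pair.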
doi_str_mv 10.48550/arxiv.2204.08325
format Article
identifier DOI: 10.48550/arxiv.2204.08325
language eng
recordid cdi_arxiv_primary_2204_08325
source arXiv.org
subjects Computer Science - Computation and Language
title GL-CLeF: A Global-Local Contrastive Learning Framework for Cross-lingual Spoken Language Understanding