Cross-lingual Word Sense Disambiguation using mBERT Embeddings with Syntactic Dependencies

Cross-lingual word sense disambiguation (WSD) tackles the challenge of disambiguating ambiguous words across languages given their context. The pre-trained BERT embedding model has proven effective at extracting contextual information about words and has been incorporated as a feature into many state-of-the-art WSD systems. To investigate how syntactic information can be added to BERT embeddings to yield word embeddings that incorporate both semantics and syntax, this project proposes concatenated embeddings, produced by building dependency parse trees and encoding the relative relationships of words into the input embeddings. Two methods are also proposed to reduce the size of the concatenated embeddings. The experimental results show that the high dimensionality of the syntax-incorporated embeddings constitutes an obstacle for the classification task, which needs to be addressed further in future studies.
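The concatenation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the dependency-relation inventory, vector sizes, and function names are all assumptions, and a real mBERT contextual vector is stood in for by a fixed-size list.

```python
# Hypothetical sketch: append a one-hot encoding of a token's dependency
# relation to its contextual embedding. Relation inventory and sizes are
# illustrative assumptions, not the paper's setup.
DEP_RELATIONS = ["nsubj", "obj", "amod", "root"]  # tiny sample inventory

def dep_one_hot(relation):
    """One-hot encode a dependency-relation label over DEP_RELATIONS."""
    vec = [0.0] * len(DEP_RELATIONS)
    vec[DEP_RELATIONS.index(relation)] = 1.0
    return vec

def concat_embedding(contextual_vec, relation):
    """Concatenate a contextual embedding with its syntax encoding."""
    return list(contextual_vec) + dep_one_hot(relation)

bert_vec = [0.1] * 768  # stand-in for a 768-dim mBERT contextual vector
combined = concat_embedding(bert_vec, "nsubj")
print(len(combined))  # 772
```

With a realistic relation inventory (and richer encodings of head-dependent structure), the appended block grows, which is the dimensionality problem the abstract reports; the paper's two proposed size-reduction methods address exactly this.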

Bibliographic Details
Author: Zhu, Xingran
Format: Article
Language: English
DOI: 10.48550/arxiv.2012.05300
Date: 2020-12-09
Source: arXiv.org
Subjects: Computer Science - Computation and Language