LU-BZU at SemEval-2021 Task 2: Word2Vec and Lemma2Vec performance in Arabic Word-in-Context disambiguation
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: This paper presents a set of experiments to evaluate and compare the
performance of CBOW Word2Vec and Lemma2Vec models for Arabic
Word-in-Context (WiC) disambiguation without using sense inventories or sense
embeddings. As part of the SemEval-2021 Shared Task 2 on WiC disambiguation, we
used the dev.ar-ar dataset (2k sentence pairs) to decide whether two words in a
given sentence pair carry the same meaning. We used two Word2Vec models:
Wiki-CBOW, a model pre-trained on Arabic Wikipedia, and another model we
trained on large Arabic corpora of about 3 billion tokens. Two Lemma2Vec models
were also constructed based on the two Word2Vec models. Each of the four models
was then used in the WiC disambiguation task and evaluated on the
SemEval-2021 test.ar-ar dataset. Finally, we report the performance of the
different models and compare lemma-based and word-based models.
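The abstract describes deciding whether a target word carries the same meaning in two contexts by comparing its embeddings. A minimal sketch of one common approach, thresholding the cosine similarity between the two context-side vectors, is shown below; the toy vectors, the `same_sense` helper, and the 0.5 threshold are illustrative assumptions, not the paper's actual decision rule or tuned parameters.

```python
# Hedged sketch (not the authors' code): WiC disambiguation by
# thresholding cosine similarity between two embedding vectors that
# stand in for Word2Vec/Lemma2Vec representations of the target word.
import numpy as np

def cosine(u, v):
    # Standard cosine similarity between two dense vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def same_sense(vec_a, vec_b, threshold=0.5):
    # Hypothetical decision rule: label the pair "same meaning" (True)
    # when similarity exceeds a tuned threshold.
    return cosine(vec_a, vec_b) >= threshold

# Toy 4-d vectors standing in for embeddings of the word in two contexts.
v1 = np.array([0.2, 0.8, 0.1, 0.4])
v2 = np.array([0.25, 0.75, 0.05, 0.5])
print(same_sense(v1, v2))
```

In practice the vectors would come from a trained word-based or lemma-based embedding model, and the threshold would be tuned on the dev.ar-ar split.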
DOI: 10.48550/arxiv.2104.08110