Language-Independent Tokenisation Rivals Language-Specific Tokenisation for Word Similarity Prediction
creator | Bollegala, Danushka; Kiryo, Ryuichi; Tsujino, Kosuke; Yukawa, Haruki |
description | Language-independent tokenisation (LIT) methods that do not require labelled
language resources or lexicons have recently gained popularity because of their
applicability in resource-poor languages. Moreover, they represent a language
compactly using a fixed-size vocabulary and can efficiently handle unseen or
rare words. On the other hand, language-specific tokenisation (LST) methods
have a long and established history, and are developed using carefully created
lexicons and training resources. Unlike the subtokens produced by LIT methods,
LST methods produce valid morphological subwords. Despite the contrasting
trade-offs between LIT and LST methods, their performance on downstream NLP
tasks remains unclear. In this paper, we empirically compare the two approaches
using semantic similarity measurement as an evaluation task across a diverse
set of languages. Our experimental results covering eight languages show that
LST consistently outperforms LIT when the vocabulary size is large, but LIT can
produce comparable or better results than LST in many languages with
comparatively smaller (i.e. less than 100K words) vocabulary sizes, encouraging
the use of LIT when language-specific resources are unavailable or incomplete,
or when a smaller model is required. Moreover, we find smoothed inverse
frequency (SIF) to be an accurate method for creating word embeddings from
subword embeddings in multilingual semantic similarity prediction tasks.
Further analysis of the nearest neighbours of tokens shows that semantically
and syntactically related tokens are embedded close together in subword
embedding spaces. (A minimal sketch of this SIF composition step follows the
record below.) |
doi_str_mv | 10.48550/arxiv.2002.11004 |
format | Article |
creationdate | 2020-02-25 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
link | https://arxiv.org/abs/2002.11004 |
identifier | DOI: 10.48550/arxiv.2002.11004 |
language | eng |
recordid | cdi_arxiv_primary_2002_11004 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Learning |
title | Language-Independent Tokenisation Rivals Language-Specific Tokenisation for Word Similarity Prediction |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-08T18%3A49%3A45IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Language-Independent%20Tokenisation%20Rivals%20Language-Specific%20Tokenisation%20for%20Word%20Similarity%20Prediction&rft.au=Bollegala,%20Danushka&rft.date=2020-02-25&rft_id=info:doi/10.48550/arxiv.2002.11004&rft_dat=%3Carxiv_GOX%3E2002_11004%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |
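The abstract above reports that smoothed inverse frequency (SIF) is an accurate way to build word embeddings from subword embeddings for similarity prediction. The following is a minimal sketch of that composition step, not the authors' implementation: the subtoken segmentations, corpus counts, random embedding table, and the smoothing constant `a = 1e-3` are all illustrative assumptions, and the function names are hypothetical.

```python
# Sketch of SIF-weighted composition of word embeddings from subword
# embeddings, scored with cosine similarity. All data below is toy data.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Hypothetical subtoken vocabulary with corpus counts and random embeddings.
counts = {"un": 500, "happi": 40, "ness": 300, "joy": 60, "ful": 250}
emb = {t: rng.normal(size=DIM) for t in counts}

def sif_weight(token, counts, a=1e-3):
    """SIF weight a / (a + p(token)), where p is the relative corpus frequency."""
    p = counts[token] / sum(counts.values())
    return a / (a + p)

def word_vector(subtokens, emb, counts, a=1e-3):
    """Compose a word embedding as the SIF-weighted mean of its subtoken vectors."""
    vecs = np.stack([sif_weight(t, counts, a) * emb[t] for t in subtokens])
    return vecs.mean(axis=0)

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Illustrative LIT-style segmentations; a real system would obtain these from
# e.g. a BPE or unigram tokeniser rather than hand-written splits.
w1 = word_vector(["un", "happi", "ness"], emb, counts)
w2 = word_vector(["joy", "ful"], emb, counts)
print(f"cosine(unhappiness, joyful) = {cosine(w1, w2):.3f}")
```

Note that the original SIF formulation (Arora et al., 2017) additionally removes the first principal component across the composed vectors; the abstract does not say whether the paper applies that step to subword composition, so it is omitted from this sketch.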