Sense representations for Portuguese: experiments with sense embeddings and deep neural language models
Sense representations have gone beyond word representations like Word2Vec, GloVe and FastText and achieved innovative performance on a wide range of natural language processing tasks. Although very useful in many applications, the traditional approaches for generating word embeddings have a strict drawback: they produce a single vector representation for a given word, ignoring the fact that ambiguous words can assume different meanings. In this paper, we explore unsupervised sense representations which, unlike traditional word embeddings, are able to induce different senses of a word by analyzing its contextual semantics in a text. The unsupervised sense representations investigated in this paper are sense embeddings and deep neural language models. We present the first experiments carried out for generating sense embeddings for Portuguese. Our experiments show that the sense embedding model (Sense2vec) outperformed traditional word embeddings in the syntactic and semantic analogy task, proving that the language resource generated here can improve the performance of NLP tasks in Portuguese. We also evaluated the performance of pre-trained deep neural language models (ELMo and BERT) in two transfer learning approaches, feature-based and fine-tuning, in the semantic textual similarity task. Our experiments indicate that the fine-tuned Multilingual and Portuguese BERT language models were able to achieve better accuracy than the ELMo model and the baselines.
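The two techniques named in the abstract can be illustrated briefly. Sense2vec-style sense embeddings are, roughly, word2vec trained over tokens that carry a coarse supervised label such as a POS tag, so that grammatically distinct uses of an ambiguous word receive separate vectors. The sketch below is a minimal illustration of that idea using gensim; the toy Portuguese corpus, token format and hyperparameters are assumptions for demonstration, not the authors' actual training pipeline or corpus.

```python
from gensim.models import Word2Vec

# Toy corpus (hypothetical): each token is annotated with its POS tag, so that
# distinct uses of an ambiguous word ("casa" = house vs. marries) become
# different vocabulary items and receive separate vectors.
corpus = [
    ["a|DET", "casa|NOUN", "amarela|ADJ", "fica|VERB", "na|ADP", "esquina|NOUN"],
    ["ela|PRON", "casa|VERB", "com|ADP", "ele|PRON", "amanhã|ADV"],
    ["o|DET", "banco|NOUN", "aprovou|VERB", "o|DET", "empréstimo|NOUN"],
] * 50  # repeat the tiny corpus so word2vec has enough examples to iterate over

model = Word2Vec(sentences=corpus, vector_size=50, window=5,
                 min_count=1, sg=1, epochs=20, seed=42)

# Each sense-tagged token now has its own embedding.
print(model.wv.most_similar("casa|NOUN", topn=3))
print(model.wv.similarity("casa|NOUN", "casa|VERB"))
```

For the transfer-learning comparison, fine-tuning a BERT checkpoint for semantic textual similarity amounts to putting a one-output regression head on top of the pre-trained encoder and training it on scored sentence pairs. A minimal sketch with Hugging Face Transformers follows; the checkpoint name (BERTimbau) and the example pair are assumptions, and a real experiment would fine-tune on an STS dataset before reading anything into the score.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed checkpoint: BERTimbau base (a Portuguese BERT); the paper's exact
# checkpoints and hyperparameters may differ.
name = "neuralmind/bert-base-portuguese-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
# num_labels=1 gives a single-output (regression) head suitable for STS scores.
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=1)

enc = tokenizer("O gato dorme no sofá.", "Um gato está dormindo no sofá.",
                return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**enc).logits.item()
print(score)  # meaningless until the head is fine-tuned on STS sentence pairs
```

The feature-based alternative mentioned in the abstract would instead keep the encoder frozen and feed its hidden states, as fixed features, to a separate similarity regressor.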
Saved in:
Published in: | Language resources and evaluation, 2021-12, Vol. 55 (4), p. 901-924 |
---|---|
Main authors: | Rodrigues da Silva, Jéssica; Caseli, Helena de M. |
Format: | Article |
Language: | English |
Publisher: | Dordrecht: Springer Netherlands |
Subjects: | Computational Linguistics; Computer Science; Experiments; Language; Language and Literature; Language modeling; Linguistics; Natural language processing; Original Paper; Performance enhancement; Performance evaluation; Portuguese language; Representations; Semantic analysis; Semantics; Social Sciences; Syntax; Words (language) |
Online access: | Full text |
DOI: | 10.1007/s10579-020-09525-1 |
ISSN: | 1574-020X |
EISSN: | 1574-0218 |
Source: | SpringerLink Journals - AutoHoldings |