RoBERTweet: A BERT Language Model for Romanian Tweets
Developing natural language processing (NLP) systems for social media analysis remains an important topic in artificial intelligence research. This article introduces RoBERTweet, the first Transformer architecture trained on Romanian tweets. Our RoBERTweet comes in two versions, following the base and large architectures of BERT. The corpus used for pre-training the models represents a novelty for the Romanian NLP community and consists of all tweets collected from 2008 to 2022. Experiments show that RoBERTweet models outperform the previous general-domain Romanian and multilingual language models on three NLP tasks with tweet inputs: emotion detection, sexist language identification, and named entity recognition. We make our models and the newly created corpus of Romanian tweets freely available.
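The abstract notes that the models were pre-trained on a corpus of raw tweets. Tweet language models typically normalize user mentions and URLs into placeholder tokens before tokenization; the sketch below illustrates that common preprocessing step (the exact normalization scheme used by RoBERTweet is an assumption here, not taken from the paper).

```python
import re

def normalize_tweet(text: str) -> str:
    """Replace URLs and user mentions with placeholder tokens,
    a common preprocessing step for tweet language models."""
    # Mask URLs first so that "@" inside a URL is not touched.
    text = re.sub(r"https?://\S+", "HTTPURL", text)
    # Mask user mentions such as "@ion".
    text = re.sub(r"@\w+", "@USER", text)
    # Collapse whitespace left over from the substitutions.
    return re.sub(r"\s+", " ", text).strip()

print(normalize_tweet("Salut @ion! Vezi https://example.com :)"))
```

The normalized text can then be fed to any BERT-style tokenizer; masking mentions and URLs reduces vocabulary sparsity on noisy social-media text.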
Saved in:
Main authors: | Tăiatu, Iulian-Marius; Avram, Andrei-Marius; Cercel, Dumitru-Clementin; Pop, Florin |
---|---|
Format: | Article |
Language: | English |
Subjects: | Computer Science - Computation and Language |
Online access: | Order full text |
creator | Tăiatu, Iulian-Marius; Avram, Andrei-Marius; Cercel, Dumitru-Clementin; Pop, Florin |
description | Developing natural language processing (NLP) systems for social media
analysis remains an important topic in artificial intelligence research. This
article introduces RoBERTweet, the first Transformer architecture trained on
Romanian tweets. Our RoBERTweet comes in two versions, following the base and
large architectures of BERT. The corpus used for pre-training the models
represents a novelty for the Romanian NLP community and consists of all tweets
collected from 2008 to 2022. Experiments show that RoBERTweet models outperform
the previous general-domain Romanian and multilingual language models on three
NLP tasks with tweet inputs: emotion detection, sexist language identification,
and named entity recognition. We make our models and the newly created corpus
of Romanian tweets freely available. |
doi_str_mv | 10.48550/arxiv.2306.06598 |
format | Article |
identifier | DOI: 10.48550/arxiv.2306.06598 |
language | eng |
recordid | cdi_arxiv_primary_2306_06598 |
source | arXiv.org |
subjects | Computer Science - Computation and Language |
title | RoBERTweet: A BERT Language Model for Romanian Tweets |