PeLLE: Encoder-based language models for Brazilian Portuguese based on open data
Format: Article
Language: English
Abstract: In this paper we present PeLLE, a family of large language models based on the RoBERTa architecture for Brazilian Portuguese, trained on curated, open data from the Carolina corpus. Aiming at reproducible results, we describe the details of the pretraining of the models. We also evaluate PeLLE models against a set of existing multilingual and PT-BR refined pretrained Transformer-based LLM encoders, contrasting the performance of large versus smaller-but-curated pretrained models on several downstream tasks. We conclude that several tasks perform better with larger models, but some tasks benefit from smaller-but-curated data in their pretraining.
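The abstract describes evaluating RoBERTa-based PT-BR encoders on downstream tasks. As a minimal sketch of how such an encoder could be loaded and queried for a classification task, assuming a Hugging Face-style checkpoint: the model identifier below is a placeholder, not an official release name of the PeLLE models.

```python
# Hypothetical sketch: querying a RoBERTa-based PT-BR encoder (such as a
# PeLLE checkpoint) on a downstream sequence-classification task.
# The model identifier is a placeholder, not the official release name.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "some-org/pelle-base"  # placeholder identifier (assumption)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)

# Encode a Brazilian Portuguese example sentence and run a forward pass.
inputs = tokenizer("O filme foi muito bom.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Class probabilities for the (hypothetical) two-label task.
print(logits.softmax(dim=-1))
```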
DOI: 10.48550/arxiv.2402.19204