Transformer Encoder for Social Science
Format: Article
Language: English
Abstract: High-quality text data has become an important data source for social scientists. We have witnessed the success of pretrained deep neural network models, such as BERT and RoBERTa, in recent social science research. In this paper, we propose a compact pretrained deep neural network, Transformer Encoder for Social Science (TESS), explicitly designed to tackle text processing tasks in social science research. Using two validation tests, we demonstrate that TESS outperforms BERT and RoBERTa by 16.7% on average when the number of training samples is limited.
DOI: 10.48550/arxiv.2208.08005
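The record itself contains no code, but the setting the abstract describes, fine-tuning a pretrained transformer encoder such as BERT or RoBERTa on a small labeled corpus, is concrete enough to illustrate. Below is a minimal sketch of that workflow using the Hugging Face transformers library; note that the `bert-base-uncased` checkpoint and the two-example toy corpus are stand-ins of my own, since the record gives no TESS checkpoint id or dataset.

```python
# Minimal sketch: fine-tuning a pretrained transformer encoder on a small
# labeled text dataset, mirroring the limited-training-sample setting the
# abstract describes. BERT is used as a stand-in; a TESS checkpoint id is
# not given in this record, so any TESS-specific name would be an assumption.
import torch
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Hypothetical toy data standing in for a small social science corpus.
texts = ["The senator supported the bill.", "The measure was widely opposed."]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few passes are typical with so few samples
    outputs = model(**batch, labels=labels)  # loss computed from labels
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Swapping the checkpoint string for a RoBERTa id (or, presumably, a published TESS checkpoint) is the only change needed to compare encoders under the same fine-tuning setup.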