Kungfupanda at SemEval-2020 Task 12: BERT-Based Multi-Task Learning for Offensive Language Detection
Saved in:

Main authors: | , , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | Nowadays, offensive content in social media has become a serious problem, and automatically detecting offensive language is an essential task. In this paper, we build an offensive language detection system that combines multi-task learning with BERT-based models. Using a pre-trained language model such as BERT, we can effectively learn representations for noisy text in social media. In addition, to boost the performance of offensive language detection, we leverage supervision signals from other related tasks. In the OffensEval-2020 competition, our model achieves a 91.51% F1 score on English Sub-task A, comparable to first place (92.23% F1). An empirical analysis is provided to explain the effectiveness of our approaches. |
DOI: | 10.48550/arxiv.2004.13432 |
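The abstract describes a shared BERT encoder whose representation feeds several task-specific heads, with supervision signals from related tasks added to the main objective. A minimal sketch of that multi-task loss structure, using a toy linear "encoder" in place of BERT (all names, dimensions, and the auxiliary weight are illustrative assumptions, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a BERT encoder: one shared tanh layer.
# Dimensions are illustrative assumptions, not the paper's configuration.
D_IN, D_HID = 16, 8
W_shared = rng.normal(size=(D_IN, D_HID))

# Two task-specific heads on top of the shared representation:
# head A for the main task (offensive vs. not, Sub-task A),
# head B for a hypothetical auxiliary related task with 3 labels.
W_head_a = rng.normal(size=(D_HID, 2))
W_head_b = rng.normal(size=(D_HID, 3))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the gold labels.
    return -np.log(probs[np.arange(len(labels)), labels]).mean()

def multitask_loss(x, y_a, y_b, aux_weight=0.5):
    h = np.tanh(x @ W_shared)  # shared representation for all tasks
    loss_a = cross_entropy(softmax(h @ W_head_a), y_a)
    loss_b = cross_entropy(softmax(h @ W_head_b), y_b)
    # Auxiliary supervision is folded into the main objective,
    # mirroring the "supervision signals from other related tasks"
    # described in the abstract.
    return loss_a + aux_weight * loss_b

x = rng.normal(size=(4, D_IN))      # a toy batch of 4 "sentence" vectors
y_a = np.array([0, 1, 1, 0])        # main-task labels
y_b = np.array([2, 0, 1, 2])        # auxiliary-task labels
print(round(multitask_loss(x, y_a, y_b), 4))
```

In the actual system, gradients from both losses update the shared encoder, so the auxiliary task regularizes the representation the main offensive-language classifier relies on.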