HateMonitors: Language Agnostic Abuse Detection in Social Media
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: Reducing hateful and offensive content in online social media poses a dual problem for moderators. On the one hand, rigid censorship cannot be imposed on social media; on the other, the free flow of such content cannot be allowed. Hence, we require an efficient abusive language detection system to detect such harmful content on social media. In this paper, we present our machine learning model, HateMonitors, developed for Hate Speech and Offensive Content Identification in Indo-European Languages (HASOC), a shared task at FIRE 2019. We have used a Gradient Boosting model, along with BERT and LASER embeddings, to make the system language agnostic. Our model secured the first position in the German sub-task A. We have also made our model public at https://github.com/punyajoy/HateMonitors-HASOC .
DOI: 10.48550/arxiv.1909.12642
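
The abstract describes the pipeline only at a high level: multilingual sentence representations from BERT and LASER are combined into features for a Gradient Boosting classifier. The sketch below illustrates one way such a pipeline could be wired up; the specific packages (sentence-transformers, laserembeddings, LightGBM), the multilingual model name, and the toy data are illustrative assumptions, not the authors' exact configuration, which is available in the linked repository.

```python
"""
Minimal sketch, in the spirit of the abstract: concatenate BERT and LASER
sentence embeddings into one feature vector and train a gradient boosting
classifier on top. Package and model choices here are assumptions; the
authors' original code is at https://github.com/punyajoy/HateMonitors-HASOC .

LASER models must be fetched once with:
    python -m laserembeddings download-models
"""
import numpy as np
from laserembeddings import Laser                      # LASER multilingual embeddings
from sentence_transformers import SentenceTransformer  # BERT-based sentence embeddings
from lightgbm import LGBMClassifier                    # gradient boosting model

# Multilingual sentence encoders (model choice is an assumption, not the paper's).
bert_encoder = SentenceTransformer("distiluse-base-multilingual-cased-v1")
laser_encoder = Laser()

def featurize(texts, lang):
    """Concatenate BERT and LASER sentence embeddings into one feature matrix."""
    bert_vecs = bert_encoder.encode(texts)                        # dense sentence vectors
    laser_vecs = laser_encoder.embed_sentences(texts, lang=lang)  # 1024-d LASER vectors
    return np.hstack([bert_vecs, laser_vecs])

# Toy training data standing in for the HASOC corpus (label 1 = hateful/offensive).
train_texts = [
    "Have a wonderful day everyone",
    "You are a complete idiot",
    "The weather in Berlin is lovely today",
    "I hate people like you, get lost",
    "Thanks for sharing this article",
    "Nobody wants you here, disappear",
]
train_labels = [0, 1, 0, 1, 0, 1]

X_train = featurize(train_texts, lang="en")
clf = LGBMClassifier(n_estimators=100, min_child_samples=1)
clf.fit(X_train, train_labels)

# Because both encoders are multilingual, the same classifier can be queried
# with German (or Hindi) input without retraining.
test_texts = ["Du bist ein Idiot", "Ich wünsche dir einen schönen Tag"]
X_test = featurize(test_texts, lang="de")
print(clf.predict(X_test))
```

The design choice this illustrates is that the classifier never sees raw tokens, only fixed-size embeddings from encoders trained on many languages, which is what allows a single trained model to be applied across the English, German, and Hindi sub-tasks.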