Memes in the Wild: Assessing the Generalizability of the Hateful Memes Challenge Dataset
Saved in:
Main Authors: | , , , , , , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Summary: | Hateful memes pose a unique challenge for current machine learning systems because their message is derived from both text- and visual-modalities. To this effect, Facebook released the Hateful Memes Challenge, a dataset of memes with pre-extracted text captions, but it is unclear whether these synthetic examples generalize to 'memes in the wild'. In this paper, we collect hateful and non-hateful memes from Pinterest to evaluate out-of-sample performance on models pre-trained on the Facebook dataset. We find that memes in the wild differ in two key aspects: 1) Captions must be extracted via OCR, injecting noise and diminishing performance of multimodal models, and 2) Memes are more diverse than 'traditional memes', including screenshots of conversations or text on a plain background. This paper thus serves as a reality check for the current benchmark of hateful meme detection and its applicability for detecting real world hate. |
DOI: | 10.48550/arxiv.2107.04313 |
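The summary notes that captions for memes in the wild must first be extracted via OCR, which injects noise before any multimodal model sees the text. The sketch below illustrates such a caption-extraction step under stated assumptions: the paper does not prescribe a specific OCR tool, so the use of pytesseract and Pillow, the function name, and the file name are illustrative only, not the authors' actual pipeline.

```python
# Minimal sketch of OCR-based caption extraction for a meme image.
# Assumptions: pytesseract + Pillow are available and the Tesseract binary
# is installed; "meme.jpg" is a hypothetical input file.
from PIL import Image
import pytesseract


def extract_caption(image_path: str) -> str:
    """Return the text found in a meme image; OCR noise is expected."""
    image = Image.open(image_path).convert("RGB")
    text = pytesseract.image_to_string(image)
    # Collapse whitespace so a downstream text encoder sees one caption string.
    return " ".join(text.split())


if __name__ == "__main__":
    print(extract_caption("meme.jpg"))
```

In practice the noisy OCR output would replace the pre-extracted captions that the Facebook dataset provides, which is one of the two gaps between the benchmark and memes in the wild that the paper highlights.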