Toxicity Detection can be Sensitive to the Conversational Context
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: User posts whose perceived toxicity depends on the conversational context are rare in current toxicity detection datasets. Hence, toxicity detectors trained on existing datasets will also tend to disregard context, making the detection of context-sensitive toxicity harder when it does occur. We construct and publicly release a dataset of 10,000 posts with two kinds of toxicity labels: (i) annotators considered each post together with its previous post as context; and (ii) annotators were given no additional context. Based on this, we introduce a new task, context sensitivity estimation, which aims to identify posts whose perceived toxicity changes when the context (previous post) is also considered. We then evaluate machine learning systems on this task, showing that classifiers of practical quality can be developed, and we show that data augmentation with knowledge distillation can further improve performance. Such systems could be used to enrich toxicity detection datasets with more context-dependent posts, or to suggest when moderators should consult the parent posts, a step that is often unnecessary and would otherwise add significant cost.
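
To make the task concrete, the following minimal sketch derives a context-sensitivity target from the two labelling conditions described in the abstract (toxicity judged with vs. without the parent post) and fits a simple text regressor to predict it. The file name, column names, and the TF-IDF/ridge baseline are illustrative assumptions, not the authors' dataset format or models.

```python
# Illustrative sketch only; it does not reproduce the paper's systems.
# Assumes a hypothetical CSV with columns "post_text", "toxicity_with_context"
# and "toxicity_without_context" (e.g. fractions of annotators who found the
# post toxic under each labelling condition).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("context_sensitivity.csv")  # hypothetical path

# Context sensitivity target: how much perceived toxicity shifts when the
# parent post is shown to annotators.
df["sensitivity"] = (df["toxicity_with_context"] - df["toxicity_without_context"]).abs()

X_train, X_test, y_train, y_test = train_test_split(
    df["post_text"], df["sensitivity"], test_size=0.2, random_state=0
)

# Simple text-only baseline: TF-IDF features of the target post plus a linear
# regressor; the paper evaluates stronger systems on this task.
vectorizer = TfidfVectorizer(min_df=2, ngram_range=(1, 2))
model = Ridge(alpha=1.0)
model.fit(vectorizer.fit_transform(X_train), y_train)

preds = model.predict(vectorizer.transform(X_test))
print("MAE:", mean_absolute_error(y_test, preds))
```

Posts with high predicted sensitivity are the ones a dataset builder might prioritise for context-aware annotation, or that a moderation interface might flag for review together with their parent posts. The abstract also reports that data augmentation with knowledge distillation improves performance further; that step is not shown in this sketch.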
DOI: 10.48550/arxiv.2111.10223