Unlocking Bias Detection: Leveraging Transformer-Based Models for Content Analysis
Saved in:
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Bias detection in text is crucial for combating the spread of negative stereotypes, misinformation, and biased decision-making. Traditional language models frequently struggle to generalize beyond their training data and are typically designed for a single task, often focusing on bias detection at the sentence level. To address this, we present the Contextualized Bi-Directional Dual Transformer (CBDT) classifier. This model combines two complementary transformer networks, the Context Transformer and the Entity Transformer, with a focus on improving bias detection capabilities. We have prepared a dataset specifically for training these models to identify and locate biases in text. Our evaluations across various datasets demonstrate the CBDT classifier's effectiveness in distinguishing biased narratives from neutral ones and in identifying specific biased terms. This work paves the way for applying the CBDT model in diverse linguistic and cultural contexts, enhancing its utility in bias detection efforts. We also make the annotated dataset available for research purposes.
DOI: 10.48550/arxiv.2310.00347
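
The abstract describes CBDT as a pair of complementary transformer encoders, a Context Transformer over the full input and an Entity Transformer over entity mentions, whose representations are combined for bias classification. The sketch below is a minimal, hypothetical illustration of that dual-encoder idea under the assumption of a PyTorch / Hugging Face setup; it is not the authors' released implementation. The backbone checkpoints (`bert-base-uncased`), concatenation-based fusion, the classification head, and all names such as `DualTransformerClassifier` are assumptions made for illustration only.

```python
# Minimal dual-encoder sketch, assuming PyTorch + Hugging Face Transformers.
# NOTE: this is NOT the authors' CBDT implementation; backbones, fusion
# strategy, and head sizes are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class DualTransformerClassifier(nn.Module):
    """Fuses a context encoder (full sentence) with an entity encoder
    (an entity mention / candidate biased term) and classifies the
    concatenated representations as biased vs. neutral."""

    def __init__(self, context_model="bert-base-uncased",
                 entity_model="bert-base-uncased", num_labels=2):
        super().__init__()
        self.context_encoder = AutoModel.from_pretrained(context_model)
        self.entity_encoder = AutoModel.from_pretrained(entity_model)
        fused_dim = (self.context_encoder.config.hidden_size
                     + self.entity_encoder.config.hidden_size)
        self.classifier = nn.Sequential(
            nn.Linear(fused_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(256, num_labels),
        )

    def forward(self, context_inputs, entity_inputs):
        # Use the first-token ([CLS]) representation from each encoder.
        ctx = self.context_encoder(**context_inputs).last_hidden_state[:, 0]
        ent = self.entity_encoder(**entity_inputs).last_hidden_state[:, 0]
        return self.classifier(torch.cat([ctx, ent], dim=-1))


if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = DualTransformerClassifier()
    sentence = "The article claims the group is inherently untrustworthy."
    entity = "the group"
    ctx_in = tok(sentence, return_tensors="pt")
    ent_in = tok(entity, return_tensors="pt")
    with torch.no_grad():
        logits = model(ctx_in, ent_in)     # shape: (1, num_labels)
    print(logits.softmax(dim=-1))          # untrained weights, so scores are arbitrary
```

This covers only sentence-level biased vs. neutral classification; locating specific biased terms, as the abstract also claims, would additionally require token-level supervision and is not shown here.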