On the Unintended Social Bias of Training Language Generation Models with Data from Local Media
Saved in:
Author:
Format: Article
Language: eng
Subjects:
Online Access: Order full text
Summary: There are concerns that neural language models may preserve some of the
stereotypes of the underlying societies that generate the large corpora needed
to train these models. For example, gender bias is a significant problem when
generating text, and its unintended memorization could impact the user
experience of many applications (e.g., the smart-compose feature in Gmail).
In this paper, we introduce a novel architecture that decouples the
representation learning of a neural model from its memory management role. This
architecture allows us to update a memory module with an equal ratio across
gender types, addressing biased correlations directly in the latent space. We
experimentally show that our approach can mitigate the gender bias
amplification in the automatic generation of news articles while providing
similar perplexity values when extending the Sequence2Sequence architecture.
DOI: 10.48550/arxiv.1911.00461
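The abstract describes a Sequence2Sequence generator whose representation learning is decoupled from an external memory module, with memory updates balanced at an equal ratio across gender types. The sketch below is not the authors' implementation; it only illustrates that idea under assumed class and function names (BalancedMemory, Seq2SeqWithMemory, balanced_write_batch), using standard PyTorch components.

```python
# Illustrative sketch (assumed names, not the paper's code): a seq2seq model
# that reads from an external memory module decoupled from the encoder/decoder,
# and a helper that balances latent states 1:1 across gender types before
# they drive memory updates.
import torch
import torch.nn as nn


class BalancedMemory(nn.Module):
    """External key-value memory kept separate from the generator's weights."""

    def __init__(self, num_slots: int, dim: int):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots, dim) * 0.02)
        self.values = nn.Parameter(torch.randn(num_slots, dim) * 0.02)

    def read(self, query: torch.Tensor) -> torch.Tensor:
        # Soft attention over memory slots.
        attn = torch.softmax(query @ self.keys.t(), dim=-1)  # (batch, slots)
        return attn @ self.values                             # (batch, dim)

    def balanced_write_batch(self, states, gender_labels):
        """Subsample latent states so every gender type contributes
        the same number of examples to the next memory update."""
        groups = [states[gender_labels == g] for g in torch.unique(gender_labels)]
        n = min(len(g) for g in groups)                       # equal ratio
        return torch.cat([g[:n] for g in groups], dim=0)


class Seq2SeqWithMemory(nn.Module):
    def __init__(self, vocab_size=1000, dim=128, num_slots=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.memory = BalancedMemory(num_slots, dim)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, src, tgt):
        _, h = self.encoder(self.embed(src))                  # h: (1, batch, dim)
        mem_context = self.memory.read(h[-1])                 # (batch, dim)
        # Condition the decoder's initial state on the memory read.
        dec_init = (h[-1] + mem_context).unsqueeze(0)
        dec_out, _ = self.decoder(self.embed(tgt), dec_init)
        return self.out(dec_out), h[-1]


# Usage: gather encoder states plus gender labels during training, then form
# balanced batches before the loss that updates the memory slots.
model = Seq2SeqWithMemory()
src = torch.randint(0, 1000, (8, 12))
tgt = torch.randint(0, 1000, (8, 12))
logits, latent = model(src, tgt)
genders = torch.randint(0, 2, (8,))                           # hypothetical 0/1 gender tags
balanced = model.memory.balanced_write_batch(latent.detach(), genders)
print(logits.shape, balanced.shape)
```

How the memory module is trained and how gender types are labeled in the news corpus are design choices made by the paper; the sketch only shows the structural separation and the equal-ratio batching mentioned in the abstract.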