Examining Pathological Bias in a Generative Adversarial Network Discriminator: A Case Study on a StyleGAN3 Model
Main authors: | , , , , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | Generative adversarial networks (GANs) generate photorealistic faces that are often indistinguishable by humans from real faces. While biases in machine learning models are often assumed to be due to biases in training data, we find pathological internal color and luminance biases in the discriminator of a pre-trained StyleGAN3-r model that are not explicable by the training data. We also find that the discriminator systematically stratifies scores by both image- and face-level qualities and that this disproportionately affects images across gender, race, and other categories. We examine axes common in research on stereotyping in social psychology. |
DOI: | 10.48550/arxiv.2402.09786 |