When does Bias Transfer in Transfer Learning?
Saved in:
Main Authors: , , , , ,
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Using transfer learning to adapt a pre-trained "source model" to a downstream "target task" can dramatically increase performance with seemingly no downside. In this work, we demonstrate that there can exist a downside after all: bias transfer, or the tendency for biases of the source model to persist even after adapting the model to the target class. Through a combination of synthetic and natural experiments, we show that bias transfer both (a) arises in realistic settings (such as when pre-training on ImageNet or other standard datasets) and (b) can occur even when the target dataset is explicitly de-biased. As transfer-learned models are increasingly deployed in the real world, our work highlights the importance of understanding the limitations of pre-trained source models. Code is available at https://github.com/MadryLab/bias-transfer
DOI: 10.48550/arxiv.2207.02842