Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations
Saved in:
| Main Author: | , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online Access: | Order full text |
Summary: Neural network classifiers can largely rely on simple spurious features, such as backgrounds, to make predictions. However, even in these cases, we show that they still often learn core features associated with the desired attributes of the data, contrary to recent findings. Inspired by this insight, we demonstrate that simple last layer retraining can match or outperform state-of-the-art approaches on spurious correlation benchmarks, but with profoundly lower complexity and computational expense. Moreover, we show that last layer retraining on large ImageNet-trained models can also significantly reduce reliance on background and texture information, improving robustness to covariate shift, after only minutes of training on a single GPU.
DOI: 10.48550/arxiv.2204.02937
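
The summary describes the method only at a high level; a minimal sketch of what "last layer retraining" amounts to in a PyTorch-style workflow is shown below. The ResNet-50 backbone, the binary `num_classes`, and the random `reweighting_loader` are illustrative assumptions that keep the sketch self-contained, not the paper's released code.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

num_classes = 2  # placeholder: e.g. a binary spurious-correlation benchmark

# Load an ImageNet-pretrained backbone and freeze all of its weights.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh, trainable linear classifier.
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Placeholder held-out data standing in for a small reweighting set.
images = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, num_classes, (32,))
reweighting_loader = DataLoader(TensorDataset(images, labels), batch_size=8)

optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.eval()  # keep batch-norm statistics fixed; gradients still reach `fc`
for inputs, targets in reweighting_loader:
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
```

In the setting the summary describes, the held-out set would be chosen so that spurious features no longer predict the label (for example, balanced across backgrounds); the random tensors above merely keep the sketch runnable.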