Removing Spurious Concepts from Neural Network Representations via Joint Subspace Estimation
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Out-of-distribution generalization in neural networks is often hampered by spurious correlations. A common strategy is to mitigate this by removing spurious concepts from the neural network representation of the data. Existing concept-removal methods tend to be overzealous, inadvertently eliminating features associated with the main task of the model and thereby harming model performance. We propose an iterative algorithm that separates spurious from main-task concepts by jointly identifying two low-dimensional orthogonal subspaces in the neural network representation. We evaluate the algorithm on benchmark datasets for computer vision (Waterbirds, CelebA) and natural language processing (MultiNLI), and show that it outperforms existing concept-removal methods.
DOI: 10.48550/arxiv.2310.11991
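
The abstract describes removing a spurious concept by identifying low-dimensional orthogonal subspaces in the representation, but the record contains no implementation details. Purely as a hedged illustration of the underlying projection idea, and not the authors' iterative joint-estimation algorithm, the sketch below estimates a single spurious-concept direction from class-conditional means and projects representations onto its orthogonal complement; the function names and toy data are assumptions for demonstration only.

```python
import numpy as np

def estimate_concept_direction(reps, concept_labels):
    """Estimate one direction capturing a binary (spurious) concept.

    A simple stand-in for the paper's joint subspace estimation: the
    direction is the difference of class-conditional means of the
    representations, normalized to unit length.
    """
    mu_pos = reps[concept_labels == 1].mean(axis=0)
    mu_neg = reps[concept_labels == 0].mean(axis=0)
    direction = mu_pos - mu_neg
    direction /= np.linalg.norm(direction)
    return direction[:, None]  # column vector spanning a 1-d subspace

def remove_concept(reps, basis):
    """Project representations onto the orthogonal complement of `basis`."""
    q, _ = np.linalg.qr(basis)        # orthonormalize the subspace basis
    return reps - reps @ q @ q.T      # subtract the projection onto it

# Toy usage: 200 samples, 64-d representations, binary spurious attribute.
rng = np.random.default_rng(0)
reps = rng.normal(size=(200, 64))
spurious = rng.integers(0, 2, size=200)
cleaned = remove_concept(reps, estimate_concept_direction(reps, spurious))
```

In the paper's setting, a main-task subspace would additionally be estimated jointly with, and kept orthogonal to, the removed spurious directions so that task-relevant features are preserved; this sketch does not attempt that step.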