Distilling Model Failures as Directions in Latent Space
Format: Article
Language: English
Abstract: Existing methods for isolating hard subpopulations and spurious correlations
in datasets often require human intervention. This can make these methods
labor-intensive and dataset-specific. To address these shortcomings, we present
a scalable method for automatically distilling a model's failure modes.
Specifically, we harness linear classifiers to identify consistent error
patterns, and, in turn, induce a natural representation of these failure modes
as directions within the feature space. We demonstrate that this framework
allows us to discover and automatically caption challenging subpopulations
within the training dataset. Moreover, by combining our framework with
off-the-shelf diffusion models, we can generate images that are especially
challenging for the analyzed model, and thus can be used to perform synthetic
data augmentation that helps remedy the model's failure modes. Code available
at https://github.com/MadryLab/failure-directions
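The core idea in the abstract — fitting a linear classifier that separates correctly- from incorrectly-classified examples in a model's feature space, and reading off its weight vector as a "failure direction" — can be sketched as follows. This is a hypothetical illustration on synthetic data, not the authors' exact pipeline (which the linked repository implements); the logistic-regression fit, the synthetic embeddings, and the variable names are all assumptions made for the example.

```python
import numpy as np

# Hypothetical sketch: learn a linear classifier that predicts whether the
# analyzed model got each example right, then use its weight vector as a
# "failure direction" in the latent space.

rng = np.random.default_rng(0)

# Synthetic stand-in for per-example feature embeddings (n_samples x dim),
# e.g. from a pretrained vision backbone.
X = rng.normal(size=(500, 32))
# Plant structure: examples with a negative first coordinate are "hard"
# and tend to be misclassified by the analyzed model (label 1 = correct).
y = ((X[:, 0] + 0.3 * rng.normal(size=500)) > 0).astype(float)

# Plain gradient descent on the logistic loss.
w = np.zeros(32)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))      # predicted P(correct)
    w -= 0.1 * (X.T @ (p - y)) / len(y)     # gradient step

direction = w / np.linalg.norm(w)   # unit-norm failure direction
scores = X @ direction              # projection of each example onto it
hardest = np.argsort(scores)[:20]   # lowest scores = flagged hard subpopulation
```

Examples with the lowest projection onto `direction` form the candidate hard subpopulation; the paper then captions such subpopulations automatically and feeds the direction to a diffusion model to synthesize further challenging images.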
DOI: 10.48550/arxiv.2206.14754