Combining Diverse Feature Priors
Format: Article
Language: English
Online access: Order full text
Abstract: To improve model generalization, model designers often restrict the features that their models use, either implicitly or explicitly. In this work, we explore the design space of leveraging such feature priors by viewing them as distinct perspectives on the data. Specifically, we find that models trained with diverse sets of feature priors have less overlapping failure modes, and can thus be combined more effectively. Moreover, we demonstrate that jointly training such models on additional (unlabeled) data allows them to correct each other's mistakes, which, in turn, leads to better generalization and resilience to spurious correlations. Code is available at https://github.com/MadryLab/copriors
DOI: 10.48550/arxiv.2110.08220
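The abstract describes jointly training models that hold different feature priors so that they pseudo-label unlabeled data for one another. The following is a minimal, illustrative co-training sketch of that general idea only; it is not the authors' implementation (see the linked repository for that), and the feature "views", toy data, and hyperparameters below are assumptions chosen for brevity.

```python
# Minimal co-training sketch: two classifiers with different "feature priors"
# (here simulated by restricting each to a disjoint subset of features)
# pseudo-label unlabeled data for each other. Illustrative only; not the
# procedure from the paper's repository.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n):
    # Toy 20-dimensional binary classification data (an assumption for this sketch).
    X = rng.normal(size=(n, 20))
    y = (X[:, 0] + X[:, 10] > 0).astype(int)
    return X, y

X_lab, y_lab = make_data(200)     # small labeled set
X_unlab, _ = make_data(2000)      # larger unlabeled pool

# Two "priors": each model only sees half of the features.
views = [slice(0, 10), slice(10, 20)]
models = [LogisticRegression(max_iter=1000) for _ in views]

X_train = [X_lab, X_lab]
y_train = [y_lab, y_lab]

for _ in range(5):
    # Fit each model on its own view of its current training set.
    for m, v, Xt, yt in zip(models, views, X_train, y_train):
        m.fit(Xt[:, v], yt)

    # Each model pseudo-labels the unlabeled pool; its most confident
    # predictions are handed to the *other* model as extra training data,
    # so the two priors can correct each other's mistakes.
    for i, (m, v) in enumerate(zip(models, views)):
        probs = m.predict_proba(X_unlab[:, v])
        conf = probs.max(axis=1)
        top = np.argsort(-conf)[:100]            # 100 most confident examples
        pseudo_y = probs.argmax(axis=1)[top]
        j = 1 - i                                # index of the partner model
        X_train[j] = np.vstack([X_train[j], X_unlab[top]])
        y_train[j] = np.concatenate([y_train[j], pseudo_y])

def ensemble_predict(X):
    # Combine the two priors at prediction time by averaging their probabilities.
    p = np.mean([m.predict_proba(X[:, v]) for m, v in zip(models, views)], axis=0)
    return p.argmax(axis=1)

X_test, y_test = make_data(1000)
print("ensemble accuracy:", (ensemble_predict(X_test) == y_test).mean())
```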