Robustness May be More Brittle than We Think under Different Degrees of Distribution Shifts
| Main authors: | , , , , |
| --- | --- |
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
Abstract: Out-of-distribution (OOD) generalization is a complicated problem due to the idiosyncrasies of possible distribution shifts between training and test domains. Most benchmarks employ diverse datasets to address this issue; however, the degree of the distribution shift between the training domains and the test domains of each dataset remains largely fixed. This may lead to biased conclusions that either underestimate or overestimate the actual OOD performance of a model. Our study delves into a more nuanced evaluation setting that covers a broad range of shift degrees. We show that the robustness of models can be quite brittle and inconsistent under different degrees of distribution shifts, and therefore one should be more cautious when drawing conclusions from evaluations under a limited range of degrees. In addition, we observe that large-scale pre-trained models, such as CLIP, are sensitive to even minute distribution shifts of novel downstream tasks. This indicates that while pre-trained representations may help improve downstream in-distribution performance, they could have minimal or even adverse effects on generalization in certain OOD scenarios of the downstream task if not used properly. In light of these findings, we encourage future research to conduct evaluations across a broader range of shift degrees whenever possible.
DOI: 10.48550/arxiv.2310.06622
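
The abstract's central recommendation is to evaluate robustness across a broad range of shift degrees rather than at a single fixed degree. The following Python sketch is only an illustration of that evaluation setting, not the paper's protocol, models, or datasets: it fits a simple linear probe on synthetic data (a hypothetical stand-in for any fixed, pre-trained model) and then measures accuracy while sweeping the magnitude of a covariate shift. The data generator, the shift grid, and all names here are assumptions made for this example.

```python
# Illustrative sketch (not the paper's method): evaluate one fixed model
# across a sweep of distribution-shift degrees instead of a single shift.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Two Gaussian class clusters in 2D; `shift` moves the data along one
    # axis and acts as the controllable "degree" of distribution shift.
    y = rng.integers(0, 2, size=n)
    means = np.where(y[:, None] == 1, 1.5, -1.5)
    x = means + rng.normal(size=(n, 2))
    x[:, 0] += shift  # covariate shift of adjustable magnitude
    return x, y

# Fit a least-squares linear probe on unshifted training data
# (a stand-in for any model whose parameters are now frozen).
x_tr, y_tr = make_data(2000, shift=0.0)
w, *_ = np.linalg.lstsq(
    np.c_[x_tr, np.ones(len(x_tr))], 2 * y_tr - 1, rcond=None
)

def accuracy(x, y):
    # Predict class 1 when the linear score is positive.
    pred = (np.c_[x, np.ones(len(x))] @ w) > 0
    return (pred == y).mean()

# Sweep shift degrees rather than evaluating at one fixed degree.
for shift in [0.0, 0.5, 1.0, 2.0, 4.0]:
    x_te, y_te = make_data(2000, shift=shift)
    print(f"shift degree {shift:>4.1f}: accuracy {accuracy(x_te, y_te):.3f}")
```

A sweep like this can reveal whether a model that looks robust at one shift degree degrades sharply at another, which is the kind of inconsistency the abstract warns against when conclusions are drawn from a limited range of degrees.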