Predicting Out-of-Distribution Error with Confidence Optimal Transport
Saved in:

Format: Article
Language: eng
Online access: Order full text
Abstract: Out-of-distribution (OOD) data poses serious challenges to deployed machine learning models, as even subtle distribution changes can incur significant performance drops. Being able to estimate a model's performance on test data is important in practice, as it indicates when to trust the model's decisions. We present a simple yet effective method to predict a model's performance on an unknown distribution without any additional annotation. Our approach is rooted in Optimal Transport theory, viewing test samples' output softmax scores from deep neural networks as empirical samples from an unknown distribution. We show that our method, Confidence Optimal Transport (COT), provides robust estimates of a model's performance on a target domain. Despite its simplicity, our method achieves state-of-the-art results on three benchmark datasets and outperforms existing methods by a large margin.
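The abstract does not spell out the COT algorithm itself, so the following is only a minimal sketch of the underlying idea it builds on: treating softmax confidence scores as empirical samples and comparing the source and target score distributions with an optimal-transport (Wasserstein) distance. All names, logit distributions, and the use of `scipy.stats.wasserstein_distance` are illustrative assumptions, not the paper's actual procedure.

```python
# Illustrative sketch only -- NOT the paper's exact COT method.
# Idea: softmax confidences on source vs. target data form two empirical
# distributions; their optimal-transport distance can signal a shift that
# correlates with a performance drop on the target domain.
import numpy as np
from scipy.stats import wasserstein_distance

def softmax(logits):
    # Numerically stable row-wise softmax.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
# Hypothetical logits from a 3-class model. Large logit spread on source
# data yields confident (peaked) softmax outputs; small spread on the
# shifted target data yields near-uniform, low-confidence outputs.
source_logits = rng.normal(scale=3.0, size=(500, 3))
target_logits = rng.normal(scale=0.5, size=(500, 3))

source_conf = softmax(source_logits).max(axis=1)  # max softmax scores
target_conf = softmax(target_logits).max(axis=1)

# 1-D Wasserstein distance between the two empirical confidence
# distributions; a larger value indicates a stronger distribution shift.
shift = wasserstein_distance(source_conf, target_conf)
print(round(shift, 3))
```

This one-dimensional comparison of max-softmax scores is only a toy stand-in; the paper works with the full softmax output vectors as samples from an unknown distribution.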
DOI: 10.48550/arxiv.2302.05018