URL: A Representation Learning Benchmark for Transferable Uncertainty Estimates
Saved in:
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Representation learning has significantly driven the field to develop pretrained models that can act as a valuable starting point when transferring to new datasets. With the rising demand for reliable machine learning and uncertainty quantification, there is a need for pretrained models that not only provide embeddings but also transferable uncertainty estimates. To guide the development of such models, we propose the Uncertainty-aware Representation Learning (URL) benchmark. Besides the transferability of the representations, it also measures the zero-shot transferability of the uncertainty estimate using a novel metric. We apply URL to evaluate eleven uncertainty quantifiers that are pretrained on ImageNet and transferred to eight downstream datasets. We find that approaches that focus on the uncertainty of the representation itself, or that estimate the prediction risk directly, outperform those that are based on the probabilities of upstream classes. Yet, achieving transferable uncertainty quantification remains an open challenge. Our findings indicate that it is not necessarily in conflict with traditional representation learning goals. Code is provided at https://github.com/mkirchhof/url.
DOI: 10.48550/arxiv.2307.03810