Training, Architecture, and Prior for Deterministic Uncertainty Methods
Format: Article
Language: English
Abstract: Accurate and efficient uncertainty estimation is crucial for building reliable Machine Learning (ML) models that provide calibrated uncertainty estimates, generalize well, and detect Out-Of-Distribution (OOD) data. To this end, Deterministic Uncertainty Methods (DUMs) are a promising model family capable of performing uncertainty estimation in a single forward pass. This work investigates important design choices in DUMs: (1) we show that training schemes that decouple the core architecture from the uncertainty head can significantly improve uncertainty performance; (2) we demonstrate that the expressiveness of the core architecture is crucial for uncertainty performance, and that additional architectural constraints meant to avoid feature collapse can deteriorate the trade-off between OOD generalization and detection; (3) contrary to other Bayesian models, we show that the prior defined by DUMs does not have a strong effect on final performance.
DOI: 10.48550/arxiv.2303.05796
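As an illustration of finding (1), the sketch below shows one way a decoupled training scheme can look in practice. It is not the paper's implementation: the toy backbone, the class-conditional Gaussian feature-density head, and all hyperparameters are assumptions, chosen because density heads fitted on a frozen backbone are a common DUM design that yields uncertainty in a single forward pass.

```python
# Minimal sketch (assumptions, not the paper's method): a two-stage, decoupled
# DUM training scheme. Stage 1 trains a deterministic backbone with standard
# cross-entropy; stage 2 freezes the backbone and fits a separate uncertainty
# head (here a class-conditional Gaussian density over features) without any
# further backbone updates.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Backbone(nn.Module):
    """Small deterministic feature extractor with a linear classifier."""
    def __init__(self, in_dim=32, feat_dim=16, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        z = self.features(x)
        return self.classifier(z), z


def train_backbone(model, x, y, epochs=5, lr=1e-3):
    """Stage 1: optimize the core architecture alone (no uncertainty head)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        logits, _ = model(x)
        F.cross_entropy(logits, y).backward()
        opt.step()


@torch.no_grad()
def fit_gaussian_head(model, x, y, n_classes):
    """Stage 2: with the backbone frozen, fit one Gaussian per class on features."""
    _, z = model(x)
    heads = []
    for c in range(n_classes):
        zc = z[y == c]
        mean = zc.mean(dim=0)
        cov = torch.cov(zc.T) + 1e-3 * torch.eye(z.shape[1])  # regularized covariance
        heads.append(torch.distributions.MultivariateNormal(mean, cov))
    return heads


@torch.no_grad()
def uncertainty(model, heads, x):
    """Uncertainty score in a single forward pass: negative max class log-density."""
    _, z = model(x)
    log_probs = torch.stack([h.log_prob(z) for h in heads], dim=1)
    return -log_probs.max(dim=1).values


if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(300, 32)              # toy in-distribution data
    y = torch.randint(0, 3, (300,))
    model = Backbone()
    train_backbone(model, x, y)           # stage 1: core architecture only
    heads = fit_gaussian_head(model, x, y, n_classes=3)  # stage 2: uncertainty head
    print(uncertainty(model, heads, torch.randn(5, 32)))
```

In this sketch the uncertainty head never backpropagates into the backbone, which is the sense in which the two components are decoupled; the paper's actual architectures, heads, and training schedules may differ.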