Approaching Neural Network Uncertainty Realism
Saved in:
Main author: | , , , , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Summary: | Statistical models are inherently uncertain. Quantifying or at least upper-bounding their uncertainties is vital for safety-critical systems such as autonomous vehicles. While standard neural networks do not report this information, several approaches exist to integrate uncertainty estimates into them. Assessing the quality of these uncertainty estimates is not straightforward, as no direct ground-truth labels are available. Instead, implicit statistical assessments are required. For regression, we propose to evaluate uncertainty realism -- a strict quality criterion -- with a Mahalanobis distance-based statistical test. An empirical evaluation reveals the need for uncertainty measures that appropriately upper-bound heavy-tailed empirical errors. In addition, we transfer the variational U-Net classification architecture to standard supervised image-to-image tasks. We adapt it to the automotive domain and show that it significantly improves uncertainty realism compared to a plain encoder-decoder model. |
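The core idea behind a Mahalanobis distance-based check can be sketched as follows: if a regression model emits a mean vector and a covariance for each prediction, and those covariances are realistic, the squared Mahalanobis distances of the residuals should follow a chi-squared distribution with as many degrees of freedom as there are output dimensions. The snippet below is a minimal illustration of that idea on synthetic data using a Kolmogorov-Smirnov test as the statistical comparison; this is an assumed test construction for illustration, not necessarily the exact procedure used in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical setup: a regression model predicts, for each of n samples,
# a mean vector mu and a covariance Sigma over k output dimensions.
k = 2
n = 5000
y = rng.normal(size=(n, k))                   # stand-in ground-truth targets
mu = y + rng.normal(scale=1.0, size=(n, k))   # predictions with unit-variance errors
Sigma = np.eye(k)                             # predicted covariance (calibrated by construction here)

# Squared Mahalanobis distance of each residual under the predicted covariance.
resid = y - mu
Sigma_inv = np.linalg.inv(Sigma)
d2 = np.einsum('ni,ij,nj->n', resid, Sigma_inv, resid)

# Under uncertainty realism, d2 should follow a chi-squared distribution
# with k degrees of freedom; a KS test quantifies the deviation.
stat, p_value = stats.kstest(d2, cdf=stats.chi2(df=k).cdf)
print(f"KS statistic: {stat:.4f}, p-value: {p_value:.4f}")
```

Because the synthetic covariances are correct by construction, the test should not reject; an overconfident model (predicted variances too small) would inflate the distances and drive the p-value toward zero, which is exactly the failure mode heavy-tailed empirical errors tend to expose.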
DOI: | 10.48550/arxiv.2101.02974 |