Improving Uncertainty Calibration of Deep Neural Networks via Truth Discovery and Geometric Optimization
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: Deep Neural Networks (DNNs), despite their tremendous success in recent years, can still cast doubt on their predictions due to the intrinsic uncertainty associated with their learning process. Ensemble techniques and post-hoc calibration are two types of approaches that have individually shown promise in improving the uncertainty calibration of DNNs. However, the synergistic effect of the two types of methods has not been well explored. In this paper, we propose a truth discovery framework to integrate ensemble-based and post-hoc calibration methods. Using the geometric variance of the ensemble candidates as an indicator of sample uncertainty, we design an accuracy-preserving truth estimator with provably no accuracy drop. Furthermore, we show that post-hoc calibration can also be enhanced by truth discovery-regularized optimization. On large-scale datasets including CIFAR and ImageNet, our method shows consistent improvement over state-of-the-art calibration approaches on both histogram-based and kernel density-based evaluation metrics. Our code is available at https://github.com/horsepurve/truly-uncertain.
DOI: 10.48550/arxiv.2106.14662
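To make two ideas from the summary concrete, the sketch below shows (a) one plausible reading of the "geometric variance of the ensemble candidates" as a per-sample uncertainty score, and (b) the standard histogram-based calibration metric (expected calibration error, ECE) of the kind the paper evaluates against. This is a minimal sketch, not the paper's implementation: the function names, the (M, N, C) input convention, and the squared-Euclidean notion of spread are assumptions introduced here, and the exact definitions in the paper and repository may differ.

```python
import numpy as np

def geometric_variance(ensemble_probs: np.ndarray) -> np.ndarray:
    """Per-sample spread of ensemble members around their mean prediction.

    NOTE: illustrative reading of "geometric variance", not the paper's
    exact definition. ensemble_probs has shape (M, N, C): softmax outputs
    of M ensemble members for N samples over C classes. Returns an (N,)
    array; larger values suggest higher sample uncertainty.
    """
    mean_probs = ensemble_probs.mean(axis=0)                      # (N, C) ensemble mean
    sq_dists = ((ensemble_probs - mean_probs) ** 2).sum(axis=2)   # (M, N) squared distances
    return sq_dists.mean(axis=0)                                  # (N,) mean spread per sample

def histogram_ece(confidences: np.ndarray, correct: np.ndarray,
                  n_bins: int = 15) -> float:
    """Standard histogram-based Expected Calibration Error (ECE).

    Bins predictions by confidence and averages the per-bin gap between
    accuracy and mean confidence, weighted by bin size.
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += (mask.sum() / n) * abs(correct[mask].mean()
                                          - confidences[mask].mean())
    return float(ece)

if __name__ == "__main__":
    # Toy usage with random "ensemble" outputs: 5 members, 1000 samples, 10 classes.
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(5, 1000, 10))
    probs = np.exp(logits)
    probs /= probs.sum(axis=-1, keepdims=True)        # softmax over classes
    uncertainty = geometric_variance(probs)           # (1000,) uncertainty scores
    mean_probs = probs.mean(axis=0)
    preds = mean_probs.argmax(axis=1)
    conf = mean_probs.max(axis=1)
    labels = rng.integers(0, 10, size=1000)
    print("mean geometric variance:", uncertainty.mean())
    print("ECE of the toy ensemble:", histogram_ece(conf, preds == labels))
```

In this reading, samples on which the ensemble members disagree geometrically (large spread around the mean prediction in the probability simplex) are treated as more uncertain, which is the signal the paper's truth estimator exploits; the ECE function is the common histogram-based metric, while the paper additionally reports a kernel density-based variant not sketched here.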