Self-Distribution Distillation: Efficient Uncertainty Estimation
Format: | Article |
Language: | English |
Summary: | Deep learning is increasingly being applied in safety-critical domains. For
these scenarios it is important to know the level of uncertainty in a model's
prediction to ensure that appropriate decisions are made by the system. Deep
ensembles are the de-facto standard approach to obtaining various measures of
uncertainty. However, ensembles often significantly increase the resources
required in the training and/or deployment phases. Approaches have been
developed that typically address the costs in only one of these phases. In this
work we propose a novel training approach, self-distribution distillation (S2D),
which is able to efficiently train a single model that can estimate
uncertainties. Furthermore, it is possible to build ensembles of these models
and apply hierarchical ensemble distillation approaches. Experiments on
CIFAR-100 showed that S2D models outperformed standard models and Monte-Carlo
dropout. Additional out-of-distribution detection experiments on LSUN, Tiny
ImageNet, and SVHN showed that even a standard deep ensemble can be outperformed
using S2D-based ensembles and novel distilled models. |
DOI: | 10.48550/arxiv.2203.08295 |
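
The record above contains only the abstract, which does not spell out how S2D trains a single model to produce uncertainty estimates. The sketch below (Python/PyTorch) shows one plausible reading under stated assumptions: a shared backbone feeds several stochastic "teacher" branches (diversity here comes from dropout) plus a deterministic "student" head that predicts a Dirichlet distribution over class probabilities and is distilled towards the teacher samples during training. Every name, layer choice, and hyperparameter (`S2DHead`, `s2d_loss`, the dropout rate, the number of teacher samples) is an illustrative assumption, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class S2DHead(nn.Module):
    """Sketch of a self-distribution-distillation head (assumed design).

    Shared backbone features feed (a) stochastic teacher branches,
    realised here with dropout, and (b) a deterministic student head
    that outputs Dirichlet concentration parameters.
    """

    def __init__(self, feat_dim: int, num_classes: int, n_teacher_samples: int = 5):
        super().__init__()
        self.teacher = nn.Linear(feat_dim, num_classes)
        self.student = nn.Linear(feat_dim, num_classes)
        self.dropout = nn.Dropout(p=0.3)  # assumed source of teacher diversity
        self.n_samples = n_teacher_samples

    def forward(self, feats: torch.Tensor):
        # S stochastic teacher predictions per input: (batch, S, classes).
        teacher_logits = torch.stack(
            [self.teacher(self.dropout(feats)) for _ in range(self.n_samples)],
            dim=1,
        )
        # Student predicts Dirichlet concentrations alpha > 1: (batch, classes).
        alphas = F.softplus(self.student(feats)) + 1.0
        return teacher_logits, alphas


def s2d_loss(teacher_logits, alphas, targets, distill_weight=1.0):
    """Classification loss on the mean teacher prediction, plus a
    distillation term: negative log-likelihood of the sampled teacher
    categoricals under the student's Dirichlet."""
    probs = teacher_logits.softmax(dim=-1)                     # (B, S, C)
    ce = F.cross_entropy(teacher_logits.mean(dim=1), targets)  # (B, C) vs (B,)
    # Dirichlet log-density: lgamma(sum a) - sum lgamma(a) + sum (a - 1) log p
    log_norm = torch.lgamma(alphas.sum(-1)) - torch.lgamma(alphas).sum(-1)
    log_lik = log_norm.unsqueeze(1) + (
        (alphas.unsqueeze(1) - 1.0) * probs.clamp_min(1e-8).log()
    ).sum(-1)
    return ce + distill_weight * -log_lik.mean()
```

At deployment time only the student head would be needed: a single forward pass yields Dirichlet parameters whose mean gives the class prediction and whose spread provides uncertainty measures, avoiding the repeated forward passes of Monte-Carlo dropout and the multiple models of a deep ensemble.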