Confidence Calibration with Bounded Error Using Transformations

Bibliographic Details
Main Authors: Jang, Sooyong; Ivanov, Radoslav; Lee, Insup; Weimer, James
Format: Article
Language: English
Description
Abstract: As machine learning techniques become widely adopted in new domains, especially in safety-critical systems such as autonomous vehicles, it is crucial to provide accurate output uncertainty estimation. As a result, many approaches have been proposed to calibrate neural networks to accurately estimate the likelihood of misclassification. However, while these methods achieve low calibration error, there is room for further improvement, especially in large-dimensional settings such as ImageNet. In this paper, we introduce a calibration algorithm, named Hoki, that works by applying random transformations to the neural network logits. We provide a sufficient condition for perfect calibration based on the number of label prediction changes observed after applying the transformations. Experiments show that the proposed approach generally outperforms state-of-the-art calibration algorithms across multiple datasets and models, especially on the challenging ImageNet dataset. Finally, Hoki is also scalable, requiring execution time comparable to that of temperature scaling.
DOI: 10.48550/arxiv.2102.12680
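
The abstract describes the mechanism only at a high level: random transformations are applied to the network's logits, and calibration is tied to how many label predictions change under those transformations. As a rough illustration of that intuition only, and not the paper's actual algorithm, the toy Python sketch below perturbs logits with Gaussian noise and scores each example by the fraction of perturbations that leave its predicted label unchanged; the function name, the Gaussian noise model, and all parameter values are assumptions made for this example.

    import numpy as np

    def flip_rate_confidence(logits, num_samples=100, noise_scale=0.5, seed=0):
        """Toy sketch (not Hoki itself): score prediction stability by how
        often the argmax label survives random Gaussian logit perturbations.

        logits: (n_examples, n_classes) array of pre-softmax outputs.
        Returns an (n_examples,) array in [0, 1]; higher = more stable.
        """
        rng = np.random.default_rng(seed)
        base_pred = logits.argmax(axis=1)  # unperturbed label predictions
        stable = np.zeros(len(logits))
        for _ in range(num_samples):
            # Apply a random transformation (here: additive Gaussian noise).
            perturbed = logits + rng.normal(scale=noise_scale, size=logits.shape)
            # Count examples whose predicted label did not change.
            stable += perturbed.argmax(axis=1) == base_pred
        return stable / num_samples

    # A confident prediction (large logit margin) should rarely flip,
    # while a near-tied one should flip often.
    logits = np.array([[5.0, 0.0, 0.0],   # confident example
                       [1.0, 0.9, 0.8]])  # ambiguous example
    print(flip_rate_confidence(logits))   # first value near 1.0, second noticeably lower

The printed values are only meant to show the qualitative gap between a stable and an unstable prediction; Hoki's actual calibration map and its sufficient condition for perfect calibration are developed in the paper.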