Well-Calibrated and Sharp Interpretable Multi-Class Models

Bibliographic Details
Main Authors: Johansson, Ulf; Löfström, Tuwe; Boström, Henrik
Format: Conference Proceedings
Language: English
Subjects:
Online Access: Full Text
Description
Summary: Interpretable models make it possible to understand individual predictions, and are in many domains considered mandatory for user acceptance and trust. If coupled with communicated algorithmic confidence, interpretable models become even more informative, also making it possible to assess and compare the confidence expressed by the models in different predictions. To earn a user's appropriate trust, however, the communicated algorithmic confidence must also be well-calibrated. In this paper, we suggest a novel way of extending Venn-Abers predictors to multi-class problems. The approach is applied to decision trees, providing well-calibrated probability intervals in the leaves. The result is one interpretable model with valid and sharp probability intervals, ready for inspection and analysis. In the experiments, the proposed method is evaluated on 20 publicly available data sets, showing that the generated models are indeed well-calibrated.
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-030-85529-1_16
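The Venn-Abers construction that the abstract builds on can be sketched for the standard binary case: the test object is appended to the calibration set twice, once per hypothetical label, and an isotonic regression is refitted each time, yielding a probability interval (p0, p1). This is a minimal illustrative sketch, not the paper's multi-class method; the function names and the toy calibration data are assumptions for the example.

```python
def isotonic_fit(labels):
    """Pool Adjacent Violators: non-decreasing least-squares fit to 0/1 labels."""
    blocks = []  # each block holds [sum_of_labels, count]
    for y in labels:
        blocks.append([float(y), 1])
        # merge backwards while the previous block's mean violates monotonicity
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] >= blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)
    return fit


def venn_abers_interval(cal_scores, cal_labels, test_score):
    """Return the (p0, p1) probability interval for one test object.

    The test object is added to the calibration set with each hypothetical
    label in turn, and the isotonic calibrator is refitted each time.
    """
    probs = []
    for hypo_label in (0, 1):
        points = sorted(
            [(s, y, 0) for s, y in zip(cal_scores, cal_labels)]
            + [(test_score, hypo_label, 1)]  # flag marks the test object
        )
        fit = isotonic_fit([y for _, y, _ in points])
        # probability the refitted calibrator assigns to the test object
        probs.append(next(p for (_, _, is_test), p in zip(points, fit) if is_test))
    return tuple(probs)  # p0 <= p1 by construction


# Toy calibration set (illustrative): scores from some underlying scorer
p0, p1 = venn_abers_interval(
    cal_scores=[0.1, 0.2, 0.3, 0.7, 0.8, 0.9],
    cal_labels=[0, 0, 0, 1, 1, 1],
    test_score=0.85,
)
print(p0, p1)  # a valid interval containing the calibrated probability
```

The paper's contribution is extending this binary construction to multi-class problems and attaching the resulting intervals to decision-tree leaves; the sketch above covers only the familiar two-class case.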