Logit-Based Ensemble Distribution Distillation for Robust Autoregressive Sequence Uncertainties
| Main authors: | , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
Summary: Efficiently and reliably estimating uncertainty is an important objective in deep learning. It is especially pertinent to autoregressive sequence tasks, where training and inference costs are typically very high. However, existing research has predominantly focused on tasks with static data such as image classification. In this work, we investigate Ensemble Distribution Distillation (EDD) applied to large-scale natural language sequence-to-sequence data. EDD aims to compress the superior uncertainty performance of an expensive (teacher) ensemble into a cheaper (student) single model. Importantly, the ability to separate knowledge (epistemic) and data (aleatoric) uncertainty is retained. Existing probability-space approaches to EDD, however, are difficult to scale to large vocabularies. We show, for modern transformer architectures on large-scale translation tasks, that modelling the ensemble logits, instead of softmax probabilities, leads to significantly better students. Moreover, the students surprisingly even outperform Deep Ensembles by up to ~10% AUROC on out-of-distribution detection, whilst matching them at in-distribution translation.
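To illustrate the logit-space idea the abstract describes, below is a minimal PyTorch sketch. It assumes, since the abstract does not specify the parameterisation, that the student predicts a diagonal-Gaussian mean and log-variance over the vocabulary logits and is trained on the individual teacher-member logits; the function names and tensor shapes are hypothetical, not the paper's API. The second helper is the standard ensemble decomposition of total predictive entropy into an aleatoric (expected entropy) part and an epistemic (mutual information) part, which is the separation the distilled student retains.

```python
import torch

def edd_logit_nll(student_mean, student_logvar, teacher_logits):
    """Gaussian negative log-likelihood of the teacher logits.

    Hypothetical shapes:
      student_mean, student_logvar: [batch, vocab]
      teacher_logits:               [M, batch, vocab] for M ensemble members
    The student is trained to place a diagonal Gaussian over logit space
    that covers all M teacher members (constant terms omitted).
    """
    var = student_logvar.exp()
    nll = 0.5 * (student_logvar + (teacher_logits - student_mean) ** 2 / var)
    return nll.mean()

def uncertainty_decomposition(member_probs):
    """Split total predictive entropy into aleatoric + epistemic parts.

    member_probs: [M, batch, vocab] softmax outputs of the M members,
    or of logit samples drawn from the distilled student's Gaussian.
    """
    mean_p = member_probs.mean(dim=0)
    total = -(mean_p * mean_p.clamp_min(1e-12).log()).sum(-1)
    aleatoric = -(member_probs * member_probs.clamp_min(1e-12).log()).sum(-1).mean(0)
    epistemic = total - aleatoric  # mutual information, >= 0 by concavity
    return total, aleatoric, epistemic
```

At test time one would sample logit vectors from the student's predicted Gaussian, softmax them, and feed the samples to the same decomposition, so a single forward pass still yields separate epistemic and aleatoric scores.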
DOI: 10.48550/arxiv.2305.10384