Comprehensive OOD Detection Improvements
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: As machine learning becomes increasingly prevalent in impactful decisions, recognizing when inference data lies outside the model's expected input distribution is paramount for giving context to predictions. Out-of-distribution (OOD) detection methods have been created for this task. Such methods can be split into representation-based and logit-based methods, according to whether they use the model's embeddings or its predictions, respectively, for OOD detection. In contrast to most papers, which focus solely on one of these groups, we address both. We employ dimensionality reduction on feature embeddings in representation-based methods, which yields both runtime speedups and improved performance. Additionally, we propose DICE-COL, a modification of the popular logit-based method Directed Sparsification (DICE) that resolves an unnoticed flaw. We demonstrate the effectiveness of our methods on the OpenOODv1.5 benchmark framework, where they significantly improve performance and set state-of-the-art results.
DOI: 10.48550/arxiv.2401.10176
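As a rough illustration of the representation-based pipeline summarized in the abstract, the sketch below reduces in-distribution feature embeddings with PCA before computing a k-nearest-neighbour distance score. This is a minimal sketch under assumptions: the abstract does not name the reduction technique or the scorer, and all function names, hyperparameters, and the toy data below are hypothetical.

```python
# Hedged sketch: dimensionality reduction before a representation-based OOD score.
# PCA and a kNN-distance score are illustrative stand-ins; the paper's exact
# choices are not specified in the abstract.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def fit_reduced_knn_scorer(id_embeddings, n_components=64, k=50):
    """Fit PCA on in-distribution (ID) embeddings and index the reduced features."""
    pca = PCA(n_components=n_components).fit(id_embeddings)
    index = NearestNeighbors(n_neighbors=k).fit(pca.transform(id_embeddings))
    return pca, index

def ood_scores(pca, index, embeddings):
    """Distance to the k-th nearest ID neighbour; larger values suggest OOD."""
    dists, _ = index.kneighbors(pca.transform(embeddings))
    return dists[:, -1]

# Toy usage with random vectors standing in for penultimate-layer embeddings.
rng = np.random.default_rng(0)
id_feats = rng.normal(size=(1000, 512))
shifted_feats = rng.normal(loc=3.0, size=(8, 512))
pca, index = fit_reduced_knn_scorer(id_feats)
print(ood_scores(pca, index, shifted_feats))
```

On the logit-based side, the following sketch shows the general DICE recipe that DICE-COL modifies: sparsify the final-layer weights by their expected contribution on in-distribution data, then score with the energy (log-sum-exp of the sparsified logits). It is a hedged reconstruction of directed sparsification in general, not this paper's code; the abstract does not describe the DICE-COL fix itself, so it is not shown, and the global-percentile threshold and array shapes are assumptions.

```python
# Hedged sketch of DICE-style directed sparsification with an energy score.
# Shapes are assumed: features (N, m), classifier weights W (C, m), bias b (C,).
import numpy as np
from scipy.special import logsumexp

def dice_energy_scores(features, W, b, mean_id_features, sparsity=0.9):
    """Mask low-contribution weights, then return energy scores (higher = more ID)."""
    contrib = W * mean_id_features[None, :]          # expected per-weight contribution
    thresh = np.percentile(contrib, sparsity * 100)  # single global threshold (assumed)
    masked_W = np.where(contrib > thresh, W, 0.0)    # keep only the top contributions
    logits = features @ masked_W.T + b               # sparsified logits, shape (N, C)
    return logsumexp(logits, axis=1)                 # energy score; low values flag OOD

# Toy usage with random weights and features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 512))
W, b = rng.normal(size=(100, 512)), np.zeros(100)
mean_id = np.abs(rng.normal(size=512))               # mean ID activations (toy)
print(dice_energy_scores(feats, W, b, mean_id))
```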