Discriminative out-of-distribution detection for semantic segmentation
Saved in:
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Most classification and segmentation datasets assume a closed-world scenario in which predictions are expressed as a distribution over a predetermined set of visual classes. However, this assumption implies unavoidable and often unnoticeable failures in the presence of out-of-distribution (OOD) input. These failures are bound to happen in most real-life applications, since current visual ontologies are far from comprehensive. We propose to address this issue by discriminative detection of OOD pixels in the input. Unlike recent approaches, we avoid making decisions by observing only the training dataset of the primary model trained to solve the desired computer vision task. Instead, we train a dedicated OOD model which discriminates the primary training set from a much larger "background" dataset that approximates the variety of the visual world. We perform our experiments on high-resolution natural images in a dense prediction setup. We use several road-driving datasets as our training distribution, while we approximate the background distribution with the ILSVRC dataset. We evaluate our approach on the WildDash test set, which is currently the only public test dataset that includes out-of-distribution images. The results show that the proposed approach succeeds in identifying out-of-distribution pixels while outperforming previous work by a wide margin.
DOI: 10.48550/arxiv.1808.07703
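
The core idea in the abstract, training a dedicated model that discriminates pixels of the primary training set from pixels of a large "background" dataset, can be illustrated with a minimal PyTorch sketch. Everything below (the tiny `OODNet` architecture, the label convention, the loader names) is an illustrative assumption, not the paper's actual model or training procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OODNet(nn.Module):
    """Tiny fully convolutional discriminator producing one OOD logit per
    pixel at input resolution (illustrative architecture only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(64, 1, 1)  # 1 logit per spatial location

    def forward(self, x):
        logits = self.head(self.features(x))
        # Upsample the logit map back to input resolution so the
        # prediction is dense (one score per input pixel).
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)

def training_step(model, optimizer, inlier_batch, background_batch):
    """One step of binary discrimination. Inlier (road-driving) pixels get
    label 0, background (ILSVRC-like) pixels label 1; the label convention
    and the assumption that both batches share the same spatial size are
    simplifications for this sketch."""
    x = torch.cat([inlier_batch, background_batch], dim=0)
    y = torch.cat([torch.zeros_like(inlier_batch[:, :1]),
                   torch.ones_like(background_batch[:, :1])], dim=0)
    loss = F.binary_cross_entropy_with_logits(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# At test time, torch.sigmoid(model(x)) yields a per-pixel OOD probability
# map that can be thresholded to flag out-of-distribution regions.
```

Under this reading, the OOD detector is an ordinary binary segmentation problem: the "background" dataset stands in for the open world, so no decision has to be derived from the primary model's training set alone.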