Detection of out-of-distribution samples using binary neuron activation patterns
Format: Article
Language: English
Abstract: Deep neural networks (DNNs) achieve outstanding performance in
various applications. Despite numerous efforts by the research community,
out-of-distribution (OOD) samples remain a significant limitation of DNN
classifiers. The ability to identify previously unseen inputs as novel is
crucial in safety-critical applications such as self-driving cars, unmanned
aerial vehicles, and robots. Existing approaches to detecting OOD samples
treat the DNN as a black box and evaluate the confidence score of its output
predictions. Unfortunately, this approach frequently fails, because DNNs are
not trained to reduce their confidence on OOD inputs. In this work, we
introduce a novel method for OOD detection, motivated by a theoretical
analysis of neuron activation patterns (NAPs) in ReLU-based architectures.
The proposed method incurs little computational overhead thanks to the binary
representation of the activation patterns extracted from convolutional
layers. An extensive empirical evaluation demonstrates its high performance
on various DNN architectures and seven image datasets.
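To make the core idea concrete, the following is a minimal, hypothetical sketch of what a binary neuron-activation-pattern (NAP) comparison could look like. It is an illustrative reconstruction, not the authors' implementation: the layer activations are random stand-ins, and the OOD score shown here (minimum Hamming distance to the patterns seen on in-distribution data) is one simple way such binary patterns might be used.

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_pattern(activations):
    """Binarize post-ReLU activations: 1 where the neuron fired, else 0."""
    return (activations > 0).astype(np.uint8)

# Hypothetical stand-in for one layer's activations on 100 in-distribution
# training samples, 32 neurons each.
train_acts = rng.normal(size=(100, 32))
train_patterns = binary_pattern(train_acts)

def ood_score(test_acts, train_patterns):
    """Minimum Hamming distance from the test input's binary pattern to any
    pattern observed on training data; larger values suggest the input is OOD."""
    p = binary_pattern(test_acts)
    return int(np.min(np.count_nonzero(train_patterns != p, axis=1)))

# A slightly perturbed training sample should land near a known pattern,
# while an atypical input (here: almost all neurons firing) should not.
in_dist = train_acts[0] + rng.normal(scale=0.01, size=32)
far_out = rng.normal(loc=5.0, size=32)
print(ood_score(in_dist, train_patterns))
print(ood_score(far_out, train_patterns))
```

Because the patterns are binary vectors, they are cheap to store and compare, which is consistent with the abstract's claim of low computational overhead.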
DOI: 10.48550/arxiv.2212.14268