Evaluation of Out-of-Distribution Detection Performance on Autonomous Driving Datasets
Format: Article
Language: English
Abstract: Safety measures need to be systematically investigated with respect to the extent to which they evaluate the intended performance of Deep Neural Networks (DNNs) for critical applications. Due to a lack of verification methods for high-dimensional DNNs, a trade-off is needed between accepted performance and the handling of out-of-distribution (OOD) samples.

This work evaluates rejecting outputs from semantic segmentation DNNs by applying, as an OOD score, a Mahalanobis distance (MD) based on the most probable class-conditional Gaussian distribution for the predicted class. The evaluation covers three DNNs trained on the Cityscapes dataset and tested on four automotive datasets, and finds that classification risk can be drastically reduced at the cost of pixel coverage, even when the measure is applied to unseen datasets. These findings support legitimizing safety measures and motivate their use when arguing for the safe deployment of DNNs in automotive perception.
DOI: 10.48550/arxiv.2401.17013
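The abstract's core idea, scoring a prediction by its Mahalanobis distance to a class-conditional Gaussian fitted on in-distribution features, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the tied-covariance estimate, and the per-feature interface are assumptions for illustration.

```python
import numpy as np

def fit_class_gaussians(features, labels, num_classes):
    """Fit per-class means and a shared (tied) covariance on
    in-distribution feature vectors (shape: [n_samples, d])."""
    d = features.shape[1]
    means = np.zeros((num_classes, d))
    cov = np.zeros((d, d))
    for c in range(num_classes):
        fc = features[labels == c]
        means[c] = fc.mean(axis=0)
        cov += (fc - means[c]).T @ (fc - means[c])  # within-class scatter
    cov /= len(features)
    # Small ridge term keeps the covariance invertible.
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(d))
    return means, cov_inv

def mahalanobis_ood_score(feature, predicted_class, means, cov_inv):
    """Squared Mahalanobis distance of one feature vector to the
    Gaussian of the predicted class; larger values suggest OOD."""
    diff = feature - means[predicted_class]
    return float(diff @ cov_inv @ diff)
```

In a rejection setup like the one evaluated above, outputs whose score exceeds a chosen threshold would be discarded, trading pixel coverage for reduced classification risk.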