Detecting Anomalies in Semantic Segmentation with Prototypes
Main Authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Abstract: | Traditional semantic segmentation methods can recognize at test time only the classes that are present in the training set. This is a significant limitation, especially for semantic segmentation algorithms running on intelligent autonomous systems deployed in realistic settings. Regardless of how many classes the system has seen at training time, it is inevitable that unexpected, unknown objects will appear at test time. Failure to identify such anomalies may lead to incorrect, even dangerous, behavior of an autonomous agent equipped with such a segmentation model when deployed in the real world. The current state of the art in anomaly segmentation uses generative models, exploiting their inability to reconstruct patterns unseen during training. However, training these models is expensive, and their generated artifacts may create false anomalies. In this paper, we take a different route and propose to address anomaly segmentation through prototype learning. Our intuition is that anomalous pixels are those that are dissimilar to all class prototypes known by the model. We extract class prototypes from the training data in a lightweight manner using a cosine similarity-based classifier. Experiments on StreetHazards show that our approach sets a new state of the art, outperforming previous works by a significant margin despite its reduced computational overhead. Code is available at https://github.com/DarioFontanel/PAnS. |
DOI: | 10.48550/arxiv.2106.00472 |
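The abstract's core scoring rule, that a pixel is anomalous when it is dissimilar to every known class prototype under cosine similarity, can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration of that idea, not the authors' PAnS implementation (see the linked repository for that); the function name, the NumPy-based setup, and the feature and prototype shapes are all hypothetical.

```python
# Minimal sketch: prototype-based anomaly scoring via cosine similarity.
# Assumes per-pixel features and per-class prototypes are already computed.
import numpy as np

def anomaly_scores(features, prototypes):
    """features: (H, W, D) per-pixel embeddings; prototypes: (C, D) class prototypes.
    Returns an (H, W) map of anomaly scores: high where a pixel is
    dissimilar to every known class prototype."""
    # L2-normalize so that dot products equal cosine similarities.
    f = features / np.linalg.norm(features, axis=-1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=-1, keepdims=True)
    # Cosine similarity of every pixel to every prototype: (H, W, C).
    sims = f @ p.T
    # A pixel is anomalous when even its best-matching prototype fits poorly.
    return 1.0 - sims.max(axis=-1)

# Toy usage: random features for a 4x4 "image" with 3 known classes.
rng = np.random.default_rng(0)
scores = anomaly_scores(rng.normal(size=(4, 4, 16)), rng.normal(size=(3, 16)))
print(scores.shape)  # (4, 4)
```

In this reading, thresholding the score map would yield an anomaly mask, while the argmax over the similarities would give the usual closed-set segmentation for non-anomalous pixels.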