Revitalizing Reconstruction Models for Multi-class Anomaly Detection via Class-Aware Contrastive Learning
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: For anomaly detection (AD), early approaches often train separate models for individual classes, yielding high performance but posing challenges in scalability and resource management. Recent efforts have shifted toward training a single model capable of handling multiple classes. However, directly extending early AD methods to multi-class settings often results in degraded performance. In this paper, we analyze this degradation in reconstruction-based methods and identify two key issues: catastrophic forgetting and inter-class confusion. To address these issues, we propose a plug-and-play modification that incorporates class-aware contrastive learning (CL). By explicitly leveraging raw object category information (e.g., carpet or wood) as supervised signals, we apply local CL to fine-tune multiscale features and global CL to learn more compact feature representations of normal patterns, thereby effectively adapting the models to multi-class settings. Experiments across four datasets (over 60 categories) verify the effectiveness of our approach, yielding significant improvements and superior performance compared to advanced methods. Notably, ablation studies show that even using pseudo-class labels can achieve comparable performance.
DOI: 10.48550/arxiv.2412.04769
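
The abstract describes class-aware contrastive learning that uses raw object category labels (e.g., carpet or wood) as supervision, pulling same-class normal features together and pushing different classes apart. As a rough illustration only, here is a minimal sketch of a supervised (class-aware) contrastive loss in PyTorch; the function name `class_aware_contrastive_loss`, the temperature value, and the batch layout are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def class_aware_contrastive_loss(features, labels, temperature=0.07):
    """Supervised (class-aware) contrastive loss over a batch of embeddings.

    features: (N, D) embeddings, e.g., pooled multiscale features of normal samples.
    labels:   (N,) integer class ids, e.g., object categories such as carpet or wood.
    Embeddings that share a label are treated as positives; all others as negatives.
    """
    features = F.normalize(features, dim=1)              # compare in cosine-similarity space
    sim = features @ features.t() / temperature          # (N, N) similarity logits

    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

    # log-softmax over all other samples, excluding self-similarity
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # average log-probability of positives per anchor (anchors with no positive are skipped)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return -(pos_log_prob[valid] / pos_counts[valid]).mean()

# Toy usage: 32 feature vectors drawn from 4 hypothetical object categories.
feats = torch.randn(32, 128)
labels = torch.randint(0, 4, (32,))
print(class_aware_contrastive_loss(feats, labels))
```

In the abstract's terms, such a loss could be applied locally on multiscale patch features and globally on pooled image-level features; the abstract also notes that pseudo-class labels, rather than ground-truth categories, achieve comparable results.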