Fascinating Supervisory Signals and Where to Find Them: Deep Anomaly Detection with Scale Learning
Format: Article
Language: English
Online Access: Order full text
Abstract: Due to the unsupervised nature of anomaly detection, the key to fueling deep models is finding supervisory signals. Departing from current reconstruction-guided generative models and transformation-based contrastive models, we devise novel data-driven supervision for tabular data by introducing a characteristic, scale, as data labels. By representing varied sub-vectors of data instances, we define scale as the relationship between the dimensionality of the original sub-vectors and that of their representations. Scales serve as labels attached to transformed representations, thus offering ample labeled data for neural network training. This paper further proposes a scale learning-based anomaly detection method. Supervised by the learning objective of scale distribution alignment, our approach learns the ranking of representations converted from varied subspaces of each data instance. Through this proxy task, our approach models inherent regularities and patterns within the data, which effectively characterize data "normality". Anomaly scores of test instances are obtained by measuring how well they fit these learned patterns. Extensive experiments show that our approach yields significant improvements over state-of-the-art generative and contrastive anomaly detection methods.
DOI: 10.48550/arxiv.2305.16114
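The abstract describes the scale-learning proxy task only at a high level. The following is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation (see the paper at the DOI above): it assumes sub-vectors are random feature subsets zero-padded to a common length, defines the scale label as the ratio of sub-vector size to representation dimensionality, and realizes "scale distribution alignment" as a KL divergence between predicted and scale-derived distributions. All names (`ScaleLearner`, `make_subvectors`, `D`, `D_REPR`, `K`) and these design choices are illustrative assumptions.

```python
# Hedged sketch of a scale-learning proxy task; assumptions noted inline.
import torch
import torch.nn as nn
import torch.nn.functional as F

D = 32        # original feature dimensionality (hypothetical)
D_REPR = 16   # fixed representation dimensionality (hypothetical)
K = 8         # sub-vectors sampled per instance (hypothetical)

class ScaleLearner(nn.Module):
    def __init__(self):
        super().__init__()
        # One encoder handles all sub-vector sizes by zero-padding inputs
        # to length D; per-size encoders would be an alternative design.
        self.encoder = nn.Sequential(nn.Linear(D, D_REPR), nn.ReLU())
        self.head = nn.Linear(D_REPR, 1)   # scalar score per representation

    def forward(self, padded):             # (B, K, D)
        h = self.encoder(padded)           # (B, K, D_REPR)
        return self.head(h).squeeze(-1)    # (B, K) per-sub-vector scores

def make_subvectors(x):
    """Sample K random feature subsets of varying size, zero-pad to D,
    and attach scale labels (sub-vector size relative to D_REPR)."""
    B = x.shape[0]
    sizes = torch.randint(2, D + 1, (K,)).tolist()
    padded = torch.zeros(B, K, D)
    for k, d in enumerate(sizes):
        idx = torch.randperm(D)[:d]        # random feature subset
        padded[:, k, :d] = x[:, idx]
    scales = torch.tensor(sizes, dtype=torch.float) / D_REPR
    return padded, scales

model = ScaleLearner()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_train = torch.randn(64, D)               # stand-in for normal training data

for _ in range(200):
    padded, scales = make_subvectors(x_train)
    scores = model(padded)                                  # (B, K)
    # Scale distribution alignment: per instance, the softmax over the K
    # predicted scores should match the distribution implied by the scale
    # labels, i.e. a soft ranking objective over the representations.
    target = F.softmax(scales, dim=0).expand_as(scores)
    loss = F.kl_div(F.log_softmax(scores, dim=1), target,
                    reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()

def anomaly_score(x_test):
    """Score = how poorly an instance fits the learned scale pattern."""
    padded, scales = make_subvectors(x_test)
    with torch.no_grad():
        scores = model(padded)
        target = F.softmax(scales, dim=0).expand_as(scores)
        log_pred = F.log_softmax(scores, dim=1)
        # Per-instance KL divergence serves as the abnormality measure.
        return (target * (target.log() - log_pred)).sum(dim=1)
```

Under these assumptions, a test instance whose sub-vector representations the network cannot rank consistently with their scale labels receives a high divergence, matching the abstract's notion of scoring anomalies by their fit to the learned patterns.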