LEA-Net: Layer-wise External Attention Network for Efficient Color Anomaly Detection
Main authors: |  ,  |
---|---|
Format: | Article |
Language: | English |
Keywords: |  |
Online access: | Order full text |
Abstract: | Exploiting prior knowledge about anomalies is an essential issue in
anomaly detection. Recently, visual attention mechanisms have become a
promising way to improve the performance of CNNs on some computer vision
tasks. In this paper, we propose a novel model called the Layer-wise External
Attention Network (LEA-Net) for efficient image anomaly detection. The core
idea is the integration of unsupervised and supervised anomaly detectors via a
visual attention mechanism. Our strategy is as follows: (i) prior knowledge
about anomalies is represented as an anomaly map generated by unsupervised
learning on normal instances, (ii) the anomaly map is translated into an
attention map by an external network, and (iii) the attention map is then
incorporated into intermediate layers of the anomaly detection network.
Notably, this layer-wise external attention can be applied to any CNN model in
an end-to-end training manner. As a pilot study, we validate LEA-Net on color
anomaly detection tasks. Through extensive experiments on the PlantVillage,
MVTec AD, and Cloud datasets, we demonstrate that the proposed layer-wise
visual attention mechanism consistently boosts the anomaly detection
performance of an existing CNN model, even on imbalanced datasets. Moreover,
we show that our attention mechanism successfully boosts the performance of
several CNN models. |
---|---|
DOI: | 10.48550/arxiv.2109.05493 |
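
The abstract's three-step strategy (an anomaly map from an unsupervised model, its translation into attention maps by an external network, and injection of those maps into intermediate layers of the detection CNN) can be illustrated with a minimal sketch. This is an assumed PyTorch rendering rather than the authors' released code: the module names `ExternalAttentionNet` and `DetectionNet`, the layer and channel sizes, and fusion by element-wise multiplication are illustrative choices, not the paper's exact configuration.

```python
# Minimal sketch of layer-wise external attention (assumed PyTorch design;
# layer counts, channel sizes, and multiplicative fusion are hypothetical).
import torch
import torch.nn as nn


class ExternalAttentionNet(nn.Module):
    """Translates a 1-channel anomaly map into attention maps matched to two
    intermediate stages of the detection network (hypothetical design)."""
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.head1 = nn.Conv2d(16, 1, 1)   # attention for stage-1 features
        self.head2 = nn.Conv2d(32, 1, 1)   # attention for stage-2 features

    def forward(self, anomaly_map):
        h1 = self.block1(anomaly_map)
        h2 = self.block2(h1)
        return torch.sigmoid(self.head1(h1)), torch.sigmoid(self.head2(h2))


class DetectionNet(nn.Module):
    """A small CNN classifier whose intermediate feature maps are modulated
    by the externally generated attention maps."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x, att1, att2):
        f1 = self.stage1(x)
        f1 = f1 * att1                      # layer-wise external attention, stage 1
        f2 = self.stage2(f1)
        f2 = f2 * att2                      # layer-wise external attention, stage 2
        return self.fc(self.pool(f2).flatten(1))


# Usage: in the paper's setting the anomaly map would come from an unsupervised
# model trained only on normal instances (e.g. reconstruction error of an
# autoencoder); a random tensor stands in for it here.
x = torch.randn(4, 3, 64, 64)              # input color images
anomaly_map = torch.rand(4, 1, 64, 64)     # stand-in for the unsupervised anomaly map
attention_net = ExternalAttentionNet()
detector = DetectionNet()
logits = detector(x, *attention_net(anomaly_map))
print(logits.shape)                         # torch.Size([4, 2])
```

Because the attention maps enter only through element-wise modulation of intermediate features, both networks can be trained jointly end to end, which is consistent with the abstract's claim that the mechanism can be attached to any CNN backbone.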