DA-Flow: Dual Attention Normalizing Flow for Skeleton-based Video Anomaly Detection
| Main authors: | , , , , , , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
Summary: Cooperation between temporal convolutional networks (TCN) and graph convolutional networks (GCN) as a processing module has shown promising results in skeleton-based video anomaly detection (SVAD). However, to keep the model lightweight, with low computational and storage complexity, shallow GCN and TCN blocks are constrained by small receptive fields and cannot capture cross-dimension interactions. To tackle this limitation, we propose a lightweight module called the Dual Attention Module (DAM) for capturing cross-dimension interaction relationships in spatio-temporal skeletal data. It employs a frame attention mechanism to identify the most significant frames and a skeleton attention mechanism to capture broader relationships across fixed partitions, with minimal parameters and FLOPs. Furthermore, the proposed Dual Attention Normalizing Flow (DA-Flow) integrates the DAM as a post-processing unit after the GCN within a normalizing flow framework. Simulations show that the proposed model is robust against noise and negative samples. Experimental results show that DA-Flow reaches competitive or better performance than existing state-of-the-art (SOTA) methods on the micro AUC metric with the fewest parameters. Moreover, we found that even without training, simply applying a random projection without dimensionality reduction to the skeleton data enables substantial anomaly detection capability.
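As a rough illustration of the dual attention idea summarized above, the sketch below shows one way GCN features could be reweighted along the frame and joint dimensions before entering the flow. The `DualAttentionModule` name, the layer widths, the pooling choices, and the residual combination are assumptions made here for illustration; this is not the authors' released implementation.

```python
# Minimal sketch of a dual (frame + skeleton) attention module.
# All design details below are assumptions for illustration, not the paper's code.
import torch
import torch.nn as nn


class DualAttentionModule(nn.Module):
    """Frame attention over time and skeleton attention over joints.

    Input: x of shape (batch, channels, frames, joints), as is typical for
    skeleton sequences produced by ST-GCN style backbones.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Frame attention: score each frame from joint-pooled features.
        self.frame_fc = nn.Sequential(
            nn.Linear(channels, channels // 2),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 2, 1),
        )
        # Skeleton attention: score each joint (or fixed body partition).
        self.joint_fc = nn.Sequential(
            nn.Linear(channels, channels // 2),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 2, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, t, v = x.shape
        # Average over joints -> (b, t, c), then score each frame.
        frame_feat = x.mean(dim=3).permute(0, 2, 1)
        frame_w = torch.softmax(self.frame_fc(frame_feat), dim=1)  # (b, t, 1)
        frame_w = frame_w.permute(0, 2, 1).unsqueeze(-1)           # (b, 1, t, 1)

        # Average over frames -> (b, v, c), then score each joint.
        joint_feat = x.mean(dim=2).permute(0, 2, 1)
        joint_w = torch.softmax(self.joint_fc(joint_feat), dim=1)  # (b, v, 1)
        joint_w = joint_w.permute(0, 2, 1).unsqueeze(2)            # (b, 1, 1, v)

        # Reweight the GCN features along both dimensions; the residual
        # connection keeps the module lightweight and stable.
        return x + x * frame_w + x * joint_w
```

Used as a drop-in post-processing step, `DualAttentionModule(64)` would take GCN features of shape (batch, 64, frames, joints) and return a tensor of the same shape, so it can sit between the GCN and the normalizing flow without changing either interface.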
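The training-free random-projection observation can likewise be illustrated with a small baseline. The Gaussian projection matrix, the nearest-neighbour scoring, and the `random_projection_scores` helper below are hypothetical choices made here; the abstract does not specify the exact recipe.

```python
# Hypothetical illustration of the random-projection finding: skeleton features
# pass through a fixed (untrained) random matrix of equal or larger output
# dimension, and anomalies are scored by distance to normal training poses.
import numpy as np


def random_projection_scores(train_poses, test_poses, out_dim=None, seed=0):
    """Score test skeletons by nearest-neighbour distance after a random projection.

    train_poses, test_poses: arrays of shape (n_samples, n_features), each row a
    flattened skeleton sequence (joints x coordinates x frames).
    """
    rng = np.random.default_rng(seed)
    in_dim = train_poses.shape[1]
    out_dim = out_dim or in_dim                       # no dimensionality reduction
    proj = rng.standard_normal((in_dim, out_dim)) / np.sqrt(out_dim)

    z_train = train_poses @ proj
    z_test = test_poses @ proj

    # Larger distance to the closest normal sample -> more anomalous.
    dists = np.linalg.norm(z_test[:, None, :] - z_train[None, :, :], axis=-1)
    return dists.min(axis=1)


if __name__ == "__main__":
    normal = np.random.randn(200, 51)                 # e.g. 17 joints x 3 coords
    test = np.vstack([np.random.randn(10, 51), np.random.randn(10, 51) + 3.0])
    print(random_projection_scores(normal, test))     # shifted rows score higher
```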
DOI: 10.48550/arxiv.2406.02976