Crowd Behavior Analysis Using Local Mid-Level Visual Descriptors

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2017-03, Vol. 27 (3), pp. 589-602
Authors: Fradi, Hajer; Luvison, Bertrand; Pham, Quoc Cuong
Format: Article
Language: English
Abstract: Crowd behavior analysis has recently emerged as an increasingly important problem for crowd monitoring and management in the visual surveillance community. In particular, it has received considerable attention for detecting potentially dangerous situations and preventing overcrowding. In this paper, we propose to quantify crowd properties with a rich set of visual descriptors. These descriptors are computed from a novel spatio-temporal model of the crowd, which captures the crowd's time-varying dynamics using local feature tracks and approximates neighborhood interactions with a Delaunay triangulation. In total, the crowd is represented as an evolving graph whose nodes correspond to the tracklets. From this graph, various mid-level representations are extracted to determine the ongoing crowd behaviors. The effectiveness of the proposed visual descriptors is demonstrated in three applications: crowd video classification, anomaly detection, and violence detection in crowds. Results on videos from several data sets confirm the relevance of these descriptors to crowd behavior analysis. In addition, comparisons with existing methods show that the proposed descriptors outperform the state of the art by a significant margin on the most challenging data sets.
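To make the graph construction in the abstract concrete, the following is a minimal Python sketch, not the authors' implementation: it builds neighborhood edges among tracklet positions via a Delaunay triangulation, the step the abstract uses to approximate neighborhood interactions. The function name delaunay_neighbors and the random example points are illustrative assumptions; scipy.spatial.Delaunay is a standard SciPy routine.

# Minimal sketch (not the paper's code): approximate neighborhood
# interactions among tracklet positions with a Delaunay triangulation.
import numpy as np
from scipy.spatial import Delaunay

def delaunay_neighbors(points):
    """Return the set of undirected edges (index pairs) linking
    Delaunay-adjacent points; `points` is an (n, 2) array holding the
    current positions of n tracklets."""
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:            # each simplex is a triangle
        for i in range(3):                   # walk its three edges
            a, b = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
            edges.add((a, b))
    return edges

# Hypothetical usage: 20 random tracklet positions in a 100x100 frame.
rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 100.0, size=(20, 2))
for a, b in sorted(delaunay_neighbors(positions)):
    # Per-edge statistics such as length are the kind of local measurement
    # from which mid-level crowd descriptors could plausibly be derived.
    print(a, b, round(float(np.linalg.norm(positions[a] - positions[b])), 1))

Re-triangulating each frame as the tracklets move would yield the evolving graph the abstract describes; per-edge statistics aggregated over time could then feed the mid-level representations.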
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2016.2615443