Depth and edge auxiliary learning for still image crowd density estimation

Detailed description

Bibliographic details
Published in: Pattern Analysis and Applications (PAA), 2021-11, Vol. 24 (4), pp. 1777-1792
Authors: Peng, Sifan; Yin, Baoqun; Hao, Xiaoliang; Yang, Qianqian; Kumar, Aakash; Wang, Luyang
Format: Article
Language: English
Online access: Full text
Description
Abstract: Crowd counting plays a significant role in crowd monitoring and management but suffers from various challenges, especially crowd-scale variations and background interference. We therefore propose a method named depth and edge auxiliary learning for still image crowd density estimation, which copes with crowd-scale variations and background interference simultaneously. The proposed multi-task framework contains three sub-tasks: crowd head edge regression, crowd density map regression, and relative depth map regression. The crowd head edge regression task outputs distinctive crowd head edge features that distinguish the crowd from a complex background. The relative depth map regression task perceives crowd-scale variations and outputs multi-scale crowd features. Moreover, we design an efficient fusion strategy that fuses the above information so that the crowd density map regression generates high-quality crowd density maps. Experiments were conducted on four mainstream datasets to verify the effectiveness and portability of our method. The results indicate that our method achieves competitive performance compared with other state-of-the-art approaches. In addition, our proposed method improves the counting accuracy of the baseline network by 15.6%.
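The abstract's central object, the crowd density map, is conventionally built by placing a unit-mass Gaussian at each annotated head position, so that the map's integral equals the head count. The paper itself does not publish code here; the sketch below is only a minimal illustration of that standard ground-truth construction (function names, kernel size, and sigma are illustrative assumptions, not the authors' settings).

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 2-D Gaussian kernel (sums to 1)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def density_map(shape, head_points, sigma=4.0, ksize=25):
    """Build a ground-truth density map from head annotations.

    Each head contributes a Gaussian of total mass 1, so the
    integral of the map equals the number of annotated heads.
    (sigma/ksize are illustrative, not the paper's values.)
    """
    h, w = shape
    dmap = np.zeros((h, w), dtype=np.float64)
    kernel = gaussian_kernel(ksize, sigma)
    r = ksize // 2
    for (y, x) in head_points:
        # Clip the kernel window at the image border, then
        # renormalize so the head still contributes mass 1.
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        x0, x1 = max(0, x - r), min(w, x + r + 1)
        ky0, ky1 = y0 - (y - r), ksize - ((y + r + 1) - y1)
        kx0, kx1 = x0 - (x - r), ksize - ((x + r + 1) - x1)
        patch = kernel[ky0:ky1, kx0:kx1]
        dmap[y0:y1, x0:x1] += patch / patch.sum()
    return dmap

# Three annotated heads -> the map integrates to 3.
heads = [(20, 30), (50, 80), (90, 90)]
dm = density_map((128, 128), heads)
print(round(dm.sum()))  # → 3
```

A counting network regresses such a map from the image, and the predicted count is simply the sum over the predicted map; this is why the 15.6% improvement is reported on counting accuracy rather than on per-pixel error.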
ISSN: 1433-7541, 1433-755X
DOI: 10.1007/s10044-021-01017-4