Complexity Evaluation of Parallel Execution of the RAPiD Deep-Learning Algorithm on Intel CPU


Detailed Description

Bibliographic Details
Main authors: Konrad, Dominic; Duan, Zhihao; Cokbas, Mertcan; Ishwar, Prakash
Format: Article
Language: English
Description
Abstract: Knowing how many people are in various indoor spaces, and where they are, is critical for reducing HVAC energy waste, for space management and spatial analytics, and in emergency scenarios. While a range of technologies have been proposed to detect and track people in large indoor spaces, ceiling-mounted fisheye cameras have recently emerged as strong contenders. Currently, RAPiD is the state-of-the-art (SOTA) algorithm for people detection in images captured by fisheye cameras. However, in large spaces several overhead fisheye cameras are needed to assure high counting accuracy, and thus multiple instances of RAPiD must be executed simultaneously. This report evaluates inference time when multiple instances of RAPiD run in parallel on an Ubuntu NUC PC with an Intel Core i7-8559U CPU. We consider three mechanisms of CPU-resource allocation for handling multiple instances of RAPiD: 1) managed by Ubuntu, 2) managed by the user via operating-system calls that assign logical cores, and 3) managed by the user via PyTorch-library calls that limit the number of threads used by PyTorch. Each scenario was evaluated on 300 images. The experimental results show that when one or two instances of RAPiD are executed in parallel, all three approaches result in similar inference times of 1.8 sec and 3.2 sec, respectively. However, when three or more instances of RAPiD run in parallel, limiting the number of threads used by PyTorch results in the shortest inference times. On average, RAPiD completes inference on 2 images simultaneously in about 3 sec, on 4 images in 6 sec, and on 8 images in less than 14 sec. This is important for real-time system design. In HVAC-application scenarios, with a typical reaction time of 10-15 min, a latency of 14 sec is negligible, so a single i7-8559U CPU can support 8 camera streams, thus reducing system cost. However, in emergency scenarios, when time is of the essence, a single CPU may be needed for each camera to reduce the latency to 1.8 sec.
DOI:10.48550/arxiv.2312.06544