GEMEL: Model Merging for Memory-Efficient, Real-Time Video Analytics at the Edge
Format: Article
Language: English
Abstract: Video analytics pipelines have steadily shifted to edge deployments to reduce bandwidth overheads and privacy violations, but in doing so, face an ever-growing resource tension. Most notably, edge-box GPUs lack the memory needed to concurrently house the growing number of (increasingly complex) models for real-time inference. Unfortunately, existing solutions that rely on time/space sharing of GPU resources are insufficient, as the required swapping delays result in unacceptable frame drops and accuracy violations. We present model merging, a new memory management technique that exploits architectural similarities between edge vision models by judiciously sharing their layers (including weights) to reduce workload memory costs and swapping delays. Our system, GEMEL, efficiently integrates merging into existing pipelines by (1) leveraging several guiding observations about per-model memory usage and inter-layer dependencies to quickly identify fruitful and accuracy-preserving merging configurations, and (2) altering edge inference schedules to maximize merging benefits. Experiments across diverse workloads reveal that GEMEL reduces memory usage by up to 60.7%, and improves overall accuracy by 8-39% relative to time/space sharing alone.
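The core idea described in the abstract, reusing identical layers (weights included) across co-located models so they are resident in GPU memory only once, can be illustrated with a small PyTorch sketch. This is not GEMEL's implementation: the model classes (DetectorA, DetectorB), the make_stem helper, and all layer shapes are hypothetical, chosen only to show how two models can reference the same parameter tensors and thereby pay the shared layers' memory cost once.

```python
# Minimal sketch of layer sharing between two vision models (not GEMEL's code).
# Both models point at the same "stem" module, so its weights are stored once.
import torch
import torch.nn as nn

def make_stem():
    # Hypothetical shared early layers, assumed architecturally identical
    # across both models.
    return nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
        nn.ReLU(inplace=True),
    )

class DetectorA(nn.Module):
    def __init__(self, stem, num_classes=10):
        super().__init__()
        self.stem = stem                         # shared layers (weights included)
        self.head = nn.Linear(64, num_classes)   # task-specific layers stay private

    def forward(self, x):
        feats = self.stem(x).mean(dim=(2, 3))    # global average pool
        return self.head(feats)

class DetectorB(nn.Module):
    def __init__(self, stem, num_classes=4):
        super().__init__()
        self.stem = stem
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        feats = self.stem(x).mean(dim=(2, 3))
        return self.head(feats)

shared_stem = make_stem()
model_a = DetectorA(shared_stem)
model_b = DetectorB(shared_stem)

# The stem's parameter tensors are literally the same objects in both models.
assert model_a.stem[0].weight.data_ptr() == model_b.stem[0].weight.data_ptr()

# Compare the naive per-model parameter count with the merged (deduplicated) count.
unique_params = {p.data_ptr(): p.numel()
                 for m in (model_a, model_b) for p in m.parameters()}
naive_params = sum(p.numel() for m in (model_a, model_b) for p in m.parameters())
print(f"naive: {naive_params} params, merged: {sum(unique_params.values())} params")
```

In GEMEL's setting the hard part is deciding which layers can be shared without hurting accuracy and retraining the merged layers accordingly; the sketch above only shows the memory-side effect of such a sharing decision.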
DOI: 10.48550/arxiv.2201.07705