Deflate‐inflate: Exploiting hashing trick for bringing inference to the edge with scalable convolutional neural networks


Full Description

Bibliographic Details
Published in: Concurrency and Computation: Practice and Experience, 2022-02, Vol. 34 (3), p. n/a
Main Authors: Nazir, Azra; Naaz Mir, Roohie; Qureshi, Shaima
Format: Article
Language: English
Online Access: Full text
Description
Summary: With each passing year, the compelling need to bring deep learning computational models to the edge grows, as does the disparity in resource demand between these models and Internet of Things edge devices. This article employs an old trick from the book, "deflate and inflate," to bridge this gap. The proposed system uses the hashing trick to deflate the model. A uniform hash function and a neighborhood function are used to inflate the model at runtime. Experimental results show that the neighborhood function approximates the original parameter space better than the uniform hash function. Compared to existing techniques for distributing the VGG‐16 model over the Fog‐Edge platform, our deployment strategy achieves a 1.7×–7.5× speedup with only 1–4 devices, due to decreased memory access and better resource utilization.
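The deflate/inflate idea described in the abstract can be sketched as follows: a layer's virtual weight matrix is never stored in full; instead, a small compressed parameter vector is kept, and each virtual weight is looked up at runtime through a hash of its position (the hashing trick). The snippet below is an illustrative sketch only, not the authors' implementation — the specific hash constants and the averaging-based `inflate_neighborhood` variant are assumptions standing in for the paper's uniform hash and neighborhood functions.

```python
import numpy as np

def inflate_uniform(compressed, shape, seed=17):
    """'Inflate' a virtual weight matrix of the given shape from a small
    compressed vector: each position (i, j) is mapped by a cheap uniform
    hash to one shared slot in `compressed`. Illustrative sketch only."""
    rows, cols = shape
    i = np.arange(rows).reshape(-1, 1)
    j = np.arange(cols).reshape(1, -1)
    # multiplicative hash of the (i, j) position into a shared-weight slot
    idx = (i * 2654435761 + j * 40503 + seed) % compressed.size
    return compressed[idx]

def inflate_neighborhood(compressed, shape, radius=1, seed=17):
    """Variant that averages a small window of slots around the hashed
    index -- a hypothetical stand-in for the paper's neighborhood
    function, which is reported to approximate the original parameter
    space better than the uniform hash alone."""
    rows, cols = shape
    i = np.arange(rows).reshape(-1, 1)
    j = np.arange(cols).reshape(1, -1)
    idx = (i * 2654435761 + j * 40503 + seed) % compressed.size
    offsets = np.arange(-radius, radius + 1)
    # gather each slot's neighborhood (with wrap-around) and average it
    neigh = (idx[..., None] + offsets) % compressed.size
    return compressed[neigh].mean(axis=-1)

# Example: 128 x 256 = 32768 virtual weights served from 64 stored ones
compressed = np.random.default_rng(0).standard_normal(64)
W_uniform = inflate_uniform(compressed, (128, 256))
W_neigh = inflate_neighborhood(compressed, (128, 256))
```

Only the 64-entry `compressed` vector needs to live in device memory; the full matrix exists transiently at inference time, which is the memory saving the abstract attributes to the deflate step.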
ISSN: 1532-0626
eISSN: 1532-0634
DOI: 10.1002/cpe.6593