FusedInf: Efficient Swapping of DNN Models for On-Demand Serverless Inference Services on the Edge
Saved in:
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Edge AI computing boxes are a new class of computing devices aimed at
revolutionizing the AI industry. These compact and robust hardware units bring
the power of AI processing directly to the source of data, at the edge of the
network. Meanwhile, on-demand serverless inference services are becoming
increasingly popular because they minimize the infrastructure cost of hosting
and running DNN models for small to medium-sized businesses. However, these
computing devices are still constrained in terms of resource availability, so
service providers must load and unload models efficiently to meet growing
demand. In this paper, we introduce FusedInf, which efficiently swaps DNN
models for on-demand serverless inference services on the edge. FusedInf
combines multiple models into a single Directed Acyclic Graph (DAG) so that
they can be loaded into GPU memory efficiently and executed faster. Our
evaluation of popular DNN models showed that creating a single DAG can make
execution up to 14% faster while reducing the memory requirement by up to 17%.
The prototype implementation is available at https://github.com/SifatTaj/FusedInf.
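
This record contains no code, but the core idea, fusing several independent models into one graph so they are loaded and evicted as a single unit, can be illustrated with a minimal PyTorch sketch. The class name FusedModels and the choice of torchvision models below are illustrative assumptions, not the authors' implementation; see the repository above for the actual system.

```python
# Minimal sketch (not the authors' implementation): wrapping several
# independent DNNs in one module so they move into and out of GPU
# memory together, in the spirit of FusedInf's single-DAG fusion.
import torch
import torch.nn as nn
import torchvision.models as models

class FusedModels(nn.Module):  # hypothetical name, for illustration only
    """Wraps independent models so they behave as one graph."""
    def __init__(self, models_dict):
        super().__init__()
        # Registering the models as submodules means a single
        # .to(device) call loads (or evicts) all of them at once.
        self.models = nn.ModuleDict(models_dict)

    def forward(self, x):
        # Run every fused model on the same input; a real serving
        # system would route each request to the model it targets.
        return {name: m(x) for name, m in self.models.items()}

fused = FusedModels({
    "resnet18": models.resnet18(weights=None),
    "mobilenet": models.mobilenet_v3_small(weights=None),
}).eval()

device = "cuda" if torch.cuda.is_available() else "cpu"
fused.to(device)   # load all fused models in one transfer

with torch.no_grad():
    outputs = fused(torch.randn(1, 3, 224, 224, device=device))

fused.to("cpu")    # swap the whole fused graph out of GPU memory
```

The ModuleDict is what lets one transfer move the whole group; the paper's reported speedup and memory savings come from fusing the models at the graph level, which this sketch only approximates.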
DOI: 10.48550/arxiv.2410.21120