High-performance machine learning inference framework for edge devices

Bibliographic details
Main authors: Chen, Gang; Calleja, Eduardo Manuel; Gao, Long
Format: Patent
Language: English
Subjects:
Online access: Order full text
Description
Summary: Techniques for high-performance machine learning (ML) inference in heterogeneous edge devices are described. An ML model trained using any of a variety of different frameworks is translated into a common format that is runnable by the inference engines of edge devices. The translated model is optimized in hardware-agnostic and/or hardware-specific ways to improve inference performance, and the optimized model is sent to the edge devices. The inference engine on any edge device can be accessed by a customer application through the same defined API, regardless of the hardware characteristics of the edge device or the original format of the ML model.
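
Below is a minimal sketch of what such a unified edge-inference API could look like, written in Python. It illustrates the ideas in the abstract (a common model format, per-device hardware profiles, and one defined API for every edge device); all class, method, and parameter names (CommonFormatModel, EdgeInferenceEngine, load_model, infer, device_profile) are hypothetical assumptions for illustration, not the interface defined in the patent.

from dataclasses import dataclass
from typing import Any, Dict, List


@dataclass
class CommonFormatModel:
    """A model translated from its original training framework into a single,
    framework-neutral format runnable on edge devices (hypothetical)."""
    name: str
    graph: bytes              # serialized, framework-neutral graph
    optimizations: List[str]  # e.g. hardware-agnostic or hardware-specific passes


class EdgeInferenceEngine:
    """One defined API for every edge device: the calling application does not
    branch on device hardware or on the framework the model was trained in."""

    def __init__(self, device_profile: str) -> None:
        # A device profile might select hardware-specific optimizations
        # (CPU, GPU, accelerator); the customer application never inspects it.
        self.device_profile = device_profile
        self._models: Dict[str, CommonFormatModel] = {}

    def load_model(self, model: CommonFormatModel) -> None:
        """Accept an optimized, common-format model pushed to the device."""
        self._models[model.name] = model

    def infer(self, model_name: str, inputs: Dict[str, Any]) -> Dict[str, Any]:
        """Run inference; the signature is identical across all edge devices."""
        model = self._models[model_name]
        # Placeholder: a real engine would execute the translated graph on the
        # device's hardware-specific runtime. Here we only echo the input names.
        return {"model": model.name, "received_inputs": list(inputs)}


# Usage: the customer application code looks the same on any device.
engine = EdgeInferenceEngine(device_profile="generic-cpu")
engine.load_model(CommonFormatModel(name="classifier", graph=b"", optimizations=["fp16"]))
print(engine.infer("classifier", {"image": [[0.0] * 224] * 224}))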