COMBINING COMPRESSION, PARTITIONING AND QUANTIZATION OF DL MODELS FOR FITMENT IN HARDWARE PROCESSORS

Bibliographic Details
Main Authors: Dey, Swarnava; Swain, Amit; Kulkarni, Gitesh; Bhaumik, Chirabrata; Mukherjee, Arijit; Tyagi, Aakash; Pal, Arpan; Ukil, Arijit; Mondal, Jayeeta; Sahu, Ishan
Format: Patent
Language: English; French; German
Abstract: Small and compact Deep Learning models are required for embedded AI in several domains. Many industrial use-cases require transforming already-trained models for ensemble embedded systems, or re-training them for a given deployment scenario with limited data for transfer learning. Moreover, the hardware platforms used in embedded applications include FPGAs, AI hardware accelerators, System-on-Chips, and on-premises computing elements (Fog / Network Edge), interconnected through heterogeneous buses / networks of differing capacities. The method of the present disclosure automatically partitions a given DNN across the ensemble devices, accounting for the accuracy-latency-power tradeoff caused by intermediate compression and by the quantization incurred when converting to AI accelerator SDKs. The method is an iterative approach that obtains a set of partitions by repeatedly refining them and generating a cascaded model for inference and training on the ensemble hardware.
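
The abstract describes, at a high level, an iterative search over DNN partition points that trades off accuracy, latency, and power under intermediate compression and quantization. The patent record gives no algorithmic details, so the following Python sketch is purely illustrative: the Device/Layer cost model, the compress_ratio and acc_weight parameters, and the brute-force refinement loop are all assumptions, not the disclosed method.

```python
# Hypothetical sketch only: cost model, weights, and search strategy are assumed,
# not taken from the patent.
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Device:
    name: str
    compute: float   # relative compute throughput (ops per unit time)
    link_bw: float   # bandwidth of the bus/network link to the next device

@dataclass
class Layer:
    name: str
    ops: float             # compute cost of the layer
    out_size: float        # activation size sent downstream if we cut after this layer
    quant_acc_drop: float  # assumed accuracy drop when this layer is quantized

def evaluate(cut_points, layers, devices, compress_ratio=0.5, acc_weight=10.0):
    """Score one candidate partition: latency plus a weighted accuracy penalty."""
    cuts = [0, *cut_points, len(layers)]
    latency = acc_drop = 0.0
    for i, dev in enumerate(devices):
        seg = layers[cuts[i]:cuts[i + 1]]
        latency += sum(l.ops for l in seg) / dev.compute
        acc_drop += sum(l.quant_acc_drop for l in seg)
        if i < len(devices) - 1:  # ship compressed intermediate activations downstream
            latency += seg[-1].out_size * compress_ratio / dev.link_bw
    return latency + acc_weight * acc_drop

def refine_partitions(layers, devices):
    """Keep the best cut set found so far (brute force over all cut points)."""
    n_cuts = len(devices) - 1
    best, best_score = None, float("inf")
    for cand in combinations(range(1, len(layers)), n_cuts):
        score = evaluate(cand, layers, devices)
        if score < best_score:
            best, best_score = cand, score
    return best, best_score

if __name__ == "__main__":
    # Toy example: a 6-layer network split across three heterogeneous devices.
    layers = [Layer(f"conv{i}", ops=1.0 + i, out_size=4.0 / (i + 1),
                    quant_acc_drop=0.01) for i in range(6)]
    devices = [Device("fpga", compute=2.0, link_bw=1.0),
               Device("edge_gpu", compute=5.0, link_bw=8.0),
               Device("fog_node", compute=8.0, link_bw=0.0)]  # last link unused
    cuts, score = refine_partitions(layers, devices)
    print(cuts, round(score, 3))
```

Exhaustive enumeration of cut points is used only to keep the sketch short; for deep networks and several devices, the kind of iterative refinement the abstract alludes to (e.g., local moves of individual cut points until the score stops improving) would replace the brute-force scan.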