Method for optimizing on-device neural network model by using sub-kernel searching module and device using the same
Main authors: , , ,
Format: Patent
Language: English
Subjects:
Online access: Order full text
Abstract: A method for optimizing an on-device neural network model by using a Sub-kernel Searching Module is provided. The method includes steps of a learning device (a) if a Big Neural Network Model having a capacity capable of performing a targeted task by using a maximal computing power of an edge device has been trained to generate a first inference result on input data, allowing the Sub-kernel Searching Module to identify a constraint and a state vector corresponding to the training data and to generate architecture information on a specific sub-kernel suitable for performing the targeted task on the training data, (b) optimizing the Big Neural Network Model according to the architecture information to generate a specific Small Neural Network Model for generating a second inference result on the training data, and (c) training the Sub-kernel Searching Module by using the first and the second inference results.
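The three steps (a)-(c) of the abstract can be sketched as a single optimization loop. The sketch below is a minimal, hypothetical illustration only: all class names, method names, and the toy "inference" computations are assumptions for readability, not taken from the patent, and the real method would operate on actual neural network kernels rather than plain lists.

```python
# Hypothetical sketch of the loop described in the abstract.
# All names and computations are illustrative assumptions, not the patented method.

class BigNeuralNetworkModel:
    """Stand-in for the pre-trained Big Neural Network Model."""

    def infer(self, data):
        # (a) First inference result from the full-capacity model
        # (placeholder computation instead of a real forward pass).
        return sum(data)

    def shrink(self, architecture):
        # (b) Derive a specific Small Neural Network Model (a sub-kernel)
        # from the architecture information.
        return SmallNeuralNetworkModel(architecture["keep_indices"])


class SmallNeuralNetworkModel:
    """Sub-kernel of the big model, restricted to selected units."""

    def __init__(self, keep_indices):
        self.keep = keep_indices

    def infer(self, data):
        # (b) Second inference result, computed on the reduced sub-kernel.
        return sum(data[i] for i in self.keep)


class SubKernelSearchingModule:
    """Maps a device constraint and a state vector to sub-kernel architecture."""

    def __init__(self):
        self.score = 0.0

    def propose(self, constraint, state_vector):
        # (a) Pick a sub-kernel small enough to satisfy the compute constraint.
        n_keep = min(constraint["max_units"], len(state_vector))
        return {"keep_indices": list(range(n_keep))}

    def update(self, first_result, second_result):
        # (c) "Train" the searching module: reward architectures whose
        # small-model output stays close to the big-model output.
        self.score = -abs(first_result - second_result)


def optimize_on_device(big_model, ssm, data, constraint):
    first = big_model.infer(data)                      # step (a)
    arch = ssm.propose(constraint, state_vector=data)  # step (a)
    small = big_model.shrink(arch)                     # step (b)
    second = small.infer(data)                         # step (b)
    ssm.update(first, second)                          # step (c)
    return small, first, second
```

For example, running `optimize_on_device(BigNeuralNetworkModel(), SubKernelSearchingModule(), [1.0, 2.0, 3.0, 4.0], {"max_units": 3})` yields a first inference result of 10.0 from the big model and a second of 6.0 from the sub-kernel, and the gap between the two drives the searching module's training signal.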