Neural network accelerator data reuse architecture based on instruction control

Detailed Description

Bibliographic Details
Main Authors: YANG LIANG, HUANG JIN, BI SIYING, LOU MIAN, JIAO FENG
Format: Patent
Language: Chinese; English
Subjects:
Online Access: Order full text
Description
Summary: The invention discloses a neural network accelerator data reuse architecture based on instruction control. Through instruction parsing, decoding, and multi-stage issue and control, input block data is processed in channel-priority order, and the partial results and data output by adjacent data blocks can be reused on chip. An instruction-transmission scheme supplies operator configuration information, from which the input buffer generates the memory access address of the current on-chip data block; according to the instruction control information, the operator's partial results and data either overwrite the cache, are accumulated into the cache, or are output as results, and each data block's partial results are cached on chip. In this way, the partial results and data of adjacent data blocks are reused multiple times during channel accumulation, and output is completed after block d
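The abstract describes an instruction field deciding, per data block, whether cached partial results are overwritten, accumulated, or drained as output. The sketch below is a minimal behavioral model of that control flow under stated assumptions, not the patent's actual architecture; all names (PsumMode, BlockInstr, process_block, base_addr) are hypothetical illustrations of the decoded instruction fields and the on-chip partial-sum buffer.

```python
# Behavioral sketch (assumption, not the patented design): an instruction-decoded
# mode steers each block's partial results into the on-chip buffer, so adjacent
# blocks reuse cached partial sums during channel-priority accumulation instead
# of writing intermediate results off chip.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

import numpy as np


class PsumMode(Enum):
    OVERWRITE = auto()   # first channel group of an output block: replace buffer contents
    ACCUMULATE = auto()  # intermediate channel groups: add into the cached partial sums
    OUTPUT = auto()      # last channel group: finish accumulation and emit the result


@dataclass
class BlockInstr:
    base_addr: int       # hypothetical field: on-chip address of the current input block
    psum_mode: PsumMode  # decoded control field selecting the buffer action


def process_block(instr: BlockInstr, partial: np.ndarray,
                  psum_buffer: np.ndarray) -> Optional[np.ndarray]:
    """Apply one decoded block instruction to the on-chip partial-sum buffer."""
    if instr.psum_mode is PsumMode.OVERWRITE:
        psum_buffer[:] = partial        # start a fresh accumulation for this output block
        return None
    if instr.psum_mode is PsumMode.ACCUMULATE:
        psum_buffer += partial          # reuse cached partials on chip, no off-chip trip
        return None
    psum_buffer += partial              # OUTPUT: final channel group of this block
    return psum_buffer.copy()           # drained result for the completed output block


if __name__ == "__main__":
    # Toy run: three channel groups of one output block, processed channel-first.
    buf = np.zeros(4, dtype=np.int32)
    partials = [np.full(4, v, dtype=np.int32) for v in (1, 2, 3)]
    modes = [PsumMode.OVERWRITE, PsumMode.ACCUMULATE, PsumMode.OUTPUT]
    for addr, (p, m) in enumerate(zip(partials, modes)):
        out = process_block(BlockInstr(base_addr=addr, psum_mode=m), p, buf)
    print(out)  # [6 6 6 6]: channel groups summed entirely in the on-chip buffer
```

In this reading, the accumulation buffer is written back only once per output block, which matches the abstract's claim that partial results of adjacent blocks are reused multiple times on chip before output.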