NEURAL NETWORK HARDWARE ACCELERATION VIA SEQUENTIALLY CONNECTED COMPUTATION MODULES
Format: Patent
Language: English
Online access: Order full text
Abstract: Neural network hardware acceleration is performed by an integrated circuit including sequentially connected computation modules. Each computation module includes a processor and an adder. The processor includes circuitry configured to receive an input data value and a weight value, and perform a mathematical operation on the input data value and the weight value to produce a resultant data value. The adder includes circuitry configured to receive the resultant data value directly from the processor, receive one of a preceding resultant data value and a preceding sum value directly from a preceding adder of a preceding computation module, add the resultant data value to the one of the preceding resultant data value and the preceding sum value to produce a sum value, and transmit one of the resultant data value and the sum value to a memory or directly to a subsequent adder of a subsequent computation module.
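Read behaviorally, the abstract describes a chain of multiply-accumulate stages: each adder forwards its running sum to the adder of the next module, so a dot product of the input values and weight values accumulates along the chain. The sketch below is an illustrative software model of that reading, not the patent's circuitry; the names ComputationModule and accelerate are invented for illustration, and the unspecified "mathematical operation" is assumed here to be multiplication.

```python
# Hypothetical behavioral model of the sequentially connected
# computation modules described in the abstract. All names are
# illustrative assumptions, not taken from the patent.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ComputationModule:
    """One module: a processor (multiply) feeding an adder (accumulate)."""
    weight: float

    def process(self, input_value: float) -> float:
        # Processor: perform a mathematical operation on the input data
        # value and the weight value (assumed to be multiplication).
        return input_value * self.weight

    def add(self, resultant: float, preceding: Optional[float]) -> float:
        # Adder: add the resultant data value to the value received from
        # the preceding module's adder (its resultant or running sum).
        # The first module has no predecessor and passes its resultant on.
        return resultant if preceding is None else resultant + preceding


def accelerate(inputs: List[float],
               modules: List[ComputationModule]) -> Optional[float]:
    """Pass data through the chain of modules and return the final sum,
    i.e. the dot product of the inputs and the module weights."""
    carried: Optional[float] = None
    for x, module in zip(inputs, modules):
        resultant = module.process(x)
        carried = module.add(resultant, carried)
    return carried


if __name__ == "__main__":
    modules = [ComputationModule(w) for w in (0.5, -1.0, 2.0)]
    print(accelerate([1.0, 2.0, 3.0], modules))  # 0.5 - 2.0 + 6.0 = 4.5
```

In hardware, each module would compute in parallel on its own operands while the partial sum ripples from adder to adder; the sequential loop above only mirrors the dataflow, not the timing.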