Area-Efficient Convolutional Block

Bibliographic Details
Main Authors: Toma-Ii, Vasile; Boyd, Richard; Biro, Zsolt; Puglia, Luca
Format: Patent
Language: English
Description
Abstract: Hardware accelerator designs for neural networks are improved with various approaches to reduce circuit area, improve power consumption, and reduce starvation. Convolutional layers of a neural network may multiply a set of weights with a set of inputs. One example defers two's complement arithmetic out of the parallelized multiplication circuits and completes it when the results are accumulated. In another example, a multiplication circuit initially multiplies an input by the maximum (or minimum) value of the multiplication range before applying the magnitude of a multiplication encoded relative to that range. In another example, after dimensional reduction earlier in the network hardware, circuitry for a convolutional layer uses a reduced number of convolutional block circuits that are reused across multiple clock cycles to apply different subsets of weight channels.
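
The abstract does not spell out how the deferred two's complement arithmetic is realized, but the general idea can be illustrated with the standard identity signed(x) = u_x - s_x * 2^n for an n-bit operand with unsigned encoding u_x and sign bit s_x. The following Python sketch is only an illustration under that assumption; the function names, the 8-bit width, and the correction scheme are hypothetical and not taken from the patent.

```python
# Minimal sketch (not the patented circuit): a dot product in which each
# parallel multiplier sees only raw unsigned bit patterns, and the
# two's-complement sign handling is deferred to the accumulation step.
# The operand width N and all names here are illustrative assumptions.

N = 8  # assumed operand width in bits


def unsigned_and_sign(x: int, n: int = N):
    """Return the raw n-bit unsigned encoding of x and its sign bit."""
    u = x & ((1 << n) - 1)
    s = (u >> (n - 1)) & 1
    return u, s


def deferred_twos_complement_dot(inputs, weights, n: int = N) -> int:
    """Accumulate products using unsigned multiplies plus a sign correction.

    Relies on the identity signed(x) = u_x - s_x * 2^n, which gives
      signed(a) * signed(w)
        = u_a*u_w - 2^n*(s_a*u_w + s_w*u_a) + 2^(2n)*s_a*s_w,
    so the multiplier array can operate on unsigned values and the
    correction terms are folded in while accumulating.
    """
    acc = 0
    for a, w in zip(inputs, weights):
        u_a, s_a = unsigned_and_sign(a, n)
        u_w, s_w = unsigned_and_sign(w, n)
        raw = u_a * u_w  # what an unsigned multiplier would produce
        corr = -(1 << n) * (s_a * u_w + s_w * u_a) + (1 << (2 * n)) * s_a * s_w
        acc += raw + corr  # sign handling completed at accumulation time
    return acc


# Quick consistency check against ordinary signed arithmetic.
inputs = [-3, 7, -120, 5]
weights = [2, -1, 4, -128]
assert deferred_twos_complement_dot(inputs, weights) == sum(
    a * w for a, w in zip(inputs, weights)
)
```

Read as hardware, a scheme along these lines would keep each parallel multiplier's datapath purely unsigned and push the sign-dependent correction into the shared accumulation logic, which is one plausible way to trade multiplier area for a small amount of extra work at the adder; this is an interpretation of the abstract, not a description of the claimed circuit.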