Distributed weight update for backpropagation of a neural network

Speed of training a neural network is improved by updating the weights of the neural network in parallel. In at least one embodiment, after backpropagation, gradients are distributed to a plurality of processors, each of which calculates a portion of the updated weights of the neural network.
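As a rough illustration of the scheme the abstract describes, the sketch below shards the gradient produced by a backward pass across several workers, each of which computes the updated weights only for its own shard. The worker count, the plain SGD update, and the use of NumPy with thread-based workers are illustrative assumptions, not the patent's actual implementation.

```python
# Minimal sketch: after backpropagation, shard the gradient across
# "processors" (threads here), each of which updates only its slice
# of the weight vector. Worker count, SGD rule, and thread workers
# are assumptions for illustration, not taken from the patent.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

NUM_WORKERS = 4
LEARNING_RATE = 0.01

def update_shard(weights, gradients, start, stop):
    # Each worker computes the updated weights only for its shard.
    weights[start:stop] -= LEARNING_RATE * gradients[start:stop]

def distributed_weight_update(weights, gradients):
    # Split the parameter range into contiguous, non-overlapping shards,
    # one per worker, so the partial updates can proceed in parallel.
    bounds = np.linspace(0, weights.size, NUM_WORKERS + 1, dtype=int)
    with ThreadPoolExecutor(max_workers=NUM_WORKERS) as pool:
        for i in range(NUM_WORKERS):
            pool.submit(update_shard, weights, gradients, bounds[i], bounds[i + 1])
    return weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal(1000)   # flattened network weights
    g = rng.standard_normal(1000)   # gradients from backpropagation
    distributed_weight_update(w, g)
    print(w[:5])
```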

Bibliographic Details
Authors: Sharan Chetlur, Natalia Gimelshein, Simon Layton, Thor Mikal Johnsen
Format: Patent
Language: English