Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing its Gradient Estimator Bias
Main authors: | , , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Summary: | Equilibrium Propagation (EP) is a biologically-inspired counterpart of
Backpropagation Through Time (BPTT) which, owing to its strong theoretical
guarantees and the locality in space of its learning rule, fosters the design
of energy-efficient hardware dedicated to learning. In practice, however, EP
does not scale to visual tasks harder than MNIST. In this work, we show that a
bias in the gradient estimate of EP, inherent in the use of finite nudging, is
responsible for this phenomenon and that cancelling it allows training deep
ConvNets by EP, including architectures with distinct forward and backward
connections. These results highlight EP as a scalable approach to compute error
gradients in deep neural networks, thereby motivating its hardware
implementation. |
DOI: | 10.48550/arxiv.2101.05536 |
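The bias the summary refers to comes from estimating a gradient with a finite nudging strength. A minimal, generic sketch of the idea (not the paper's code; the function `f` and the values used are illustrative stand-ins): a one-sided finite difference carries an error of order O(beta), while a symmetric two-sided difference cancels the first-order term, leaving only O(beta^2).

```python
# Illustrative sketch, not the paper's implementation: finite "nudging"
# estimates a gradient by a finite difference in the nudging strength beta.
# One-sided differences are biased at O(beta); symmetric differences
# cancel the first-order term, reducing the bias to O(beta^2).

def one_sided_estimate(f, x, beta):
    """One-sided finite difference: biased at order O(beta)."""
    return (f(x + beta) - f(x)) / beta

def symmetric_estimate(f, x, beta):
    """Symmetric finite difference: bias reduced to O(beta^2)."""
    return (f(x + beta) - f(x - beta)) / (2 * beta)

if __name__ == "__main__":
    f = lambda x: x ** 3          # toy function; true derivative is 3*x**2
    x, beta = 1.0, 0.1
    true_grad = 3 * x ** 2
    err_one = abs(one_sided_estimate(f, x, beta) - true_grad)   # ~0.31
    err_sym = abs(symmetric_estimate(f, x, beta) - true_grad)   # ~0.01
    print(err_one, err_sym)       # the symmetric error is markedly smaller
```

The same symmetry argument is what motivates two-sided nudging (+beta and -beta phases) as a way to debias the EP gradient estimate.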