Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing its Gradient Estimator Bias

Equilibrium Propagation (EP) is a biologically inspired algorithm for convergent RNNs with a local learning rule that comes with strong theoretical guarantees. The parameter updates of the neural network during the credit assignment phase have been shown mathematically to approach the gradients provided by Backpropagation Through Time (BPTT) when the network is infinitesimally nudged toward its target. In practice, however, training a network with the gradient estimates provided by EP does not scale to visual tasks harder than MNIST. In this work, we show that a bias in the gradient estimate of EP, inherent in the use of finite nudging, is responsible for this phenomenon, and that cancelling it allows training deep ConvNets by EP. We show that this bias can be greatly reduced by using symmetric nudging (a positive nudging and a negative one). We also generalize previous EP equations to the case of cross-entropy loss (as opposed to squared error). As a result of these advances, we achieve a test error of 11.7% on CIFAR-10 by EP, which approaches the error achieved by BPTT and is a major improvement over the standard EP approach with same-sign nudging, which gives 86% test error. We also apply these techniques to train an architecture with asymmetric forward and backward connections, yielding a 13.2% test error. These results highlight EP as a compelling biologically plausible approach to computing error gradients in deep neural networks.
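
Why symmetric nudging cancels the leading bias term can be seen with a standard central-difference argument. As a sketch (the notation below is our shorthand, not quoted from the paper: write f(beta) for the update quantity, i.e. the partial derivative of the primitive function Phi with respect to the parameters theta, evaluated at the steady state reached under nudging strength beta), a Taylor expansion around beta = 0 gives

\hat{g}_{\text{one-sided}}(\beta) = \frac{f(\beta) - f(0)}{\beta} = f'(0) + \frac{\beta}{2} f''(0) + O(\beta^2)

\hat{g}_{\text{sym}}(\beta) = \frac{f(\beta) - f(-\beta)}{2\beta} = f'(0) + \frac{\beta^2}{6} f'''(0) + O(\beta^4)

EP theory identifies f'(0) with the loss gradient (up to sign convention), so the symmetric estimator reduces the bias from O(beta) to O(beta^2), at the cost of running one extra nudged phase (beta and -beta instead of beta and 0).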

Bibliographic Details
Main authors: Laborieux, Axel; Ernoult, Maxence; Scellier, Benjamin; Bengio, Yoshua; Grollier, Julie; Querlioz, Damien
Format: Article
Language: English
Subjects: Computer Science - Neural and Evolutionary Computing
Online access: Request full text
creator Laborieux, Axel; Ernoult, Maxence; Scellier, Benjamin; Bengio, Yoshua; Grollier, Julie; Querlioz, Damien
description Equilibrium Propagation (EP) is a biologically inspired algorithm for convergent RNNs with a local learning rule that comes with strong theoretical guarantees. The parameter updates of the neural network during the credit assignment phase have been shown mathematically to approach the gradients provided by Backpropagation Through Time (BPTT) when the network is infinitesimally nudged toward its target. In practice, however, training a network with the gradient estimates provided by EP does not scale to visual tasks harder than MNIST. In this work, we show that a bias in the gradient estimate of EP, inherent in the use of finite nudging, is responsible for this phenomenon, and that cancelling it allows training deep ConvNets by EP. We show that this bias can be greatly reduced by using symmetric nudging (a positive nudging and a negative one). We also generalize previous EP equations to the case of cross-entropy loss (as opposed to squared error). As a result of these advances, we achieve a test error of 11.7% on CIFAR-10 by EP, which approaches the error achieved by BPTT and is a major improvement over the standard EP approach with same-sign nudging, which gives 86% test error. We also apply these techniques to train an architecture with asymmetric forward and backward connections, yielding a 13.2% test error. These results highlight EP as a compelling biologically plausible approach to computing error gradients in deep neural networks.
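
The bias cancellation described above is easy to check numerically. The following is a minimal, self-contained sketch on a one-dimensional toy system (the quadratic energy, variable names, and hyperparameters are invented for illustration; this is not the paper's code or architecture): the free and nudged steady states are found by relaxation, and the one-sided and symmetric estimates are compared against the exact loss gradient.

# Toy convergent system: scalar state s, weight w, input x, target y.
# Energy  E(w, s) = 0.5*s^2 - w*x*s   (free steady state: s = w*x)
# Loss    L(s)    = 0.5*(s - y)^2
# The nudged relaxation minimizes E + beta*L over s.

def relax(w, x, y, beta, steps=2000, lr=0.1):
    """Gradient-descent relaxation of s to the steady state of E + beta*L."""
    s = 0.0
    for _ in range(steps):
        grad_s = (s - w * x) + beta * (s - y)  # d(E + beta*L)/ds
        s -= lr * grad_s
    return s

def dE_dw(w, x, s):
    """Partial derivative of the energy E w.r.t. the weight w, at fixed s."""
    return -x * s

w, x, y, beta = 0.7, 1.5, 2.0, 0.2

s_free = relax(w, x, y, 0.0)    # free phase (beta = 0)
s_pos  = relax(w, x, y, +beta)  # positively nudged phase
s_neg  = relax(w, x, y, -beta)  # negatively nudged phase

g_true = (w * x - y) * x  # exact gradient of L at the free steady state s = w*x

# One-sided EP estimate: O(beta) bias.
g_one = (dE_dw(w, x, s_pos) - dE_dw(w, x, s_free)) / beta
# Symmetric EP estimate: O(beta^2) bias.
g_sym = (dE_dw(w, x, s_pos) - dE_dw(w, x, s_neg)) / (2 * beta)

print(f"true gradient: {g_true:+.4f}")
print(f"one-sided EP : {g_one:+.4f}  (error {abs(g_one - g_true):.4f})")
print(f"symmetric EP : {g_sym:+.4f}  (error {abs(g_sym - g_true):.4f})")

On this toy the steady states can be computed analytically, giving g_one = g_true/(1+beta) and g_sym = g_true/(1-beta^2), so the printed errors shrink from roughly beta*|g_true| to beta^2*|g_true|, mirroring the first-order bias cancellation the paper attributes to symmetric nudging.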
doi_str_mv 10.48550/arxiv.2006.03824
format Article
identifier DOI: 10.48550/arxiv.2006.03824
language eng
recordid cdi_arxiv_primary_2006_03824
source arXiv.org
subjects Computer Science - Neural and Evolutionary Computing
title Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing its Gradient Estimator Bias
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-21T19%3A13%3A45IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Scaling%20Equilibrium%20Propagation%20to%20Deep%20ConvNets%20by%20Drastically%20Reducing%20its%20Gradient%20Estimator%20Bias&rft.au=Laborieux,%20Axel&rft.date=2020-06-06&rft_id=info:doi/10.48550/arxiv.2006.03824&rft_dat=%3Carxiv_GOX%3E2006_03824%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true