Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing its Gradient Estimator Bias

Equilibrium Propagation (EP) is a biologically-inspired counterpart of Backpropagation Through Time (BPTT) which, owing to its strong theoretical guarantees and the locality in space of its learning rule, fosters the design of energy-efficient hardware dedicated to learning. In practice, however, EP does not scale to visual tasks harder than MNIST. In this work, we show that a bias in the gradient estimate of EP, inherent in the use of finite nudging, is responsible for this phenomenon and that cancelling it allows training deep ConvNets by EP, including architectures with distinct forward and backward connections. These results highlight EP as a scalable approach to compute error gradients in deep neural networks, thereby motivating its hardware implementation.
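The bias mentioned in the abstract can be made concrete with the standard EP gradient estimator. The following is a minimal sketch, assuming the usual EP formulation (primitive function Φ, nudging strength β, nudged steady state s_*^β, and loss L); the notation is illustrative rather than quoted from the paper, and signs follow one common convention. A one-sided finite-nudging estimate carries a first-order error in β, whereas a symmetric difference over ±β cancels it:

\[
\hat{\nabla}^{\beta}_{\theta} \;=\; \frac{1}{\beta}\left(\frac{\partial \Phi}{\partial \theta}\bigl(\theta, s^{\beta}_{*}\bigr) - \frac{\partial \Phi}{\partial \theta}\bigl(\theta, s^{0}_{*}\bigr)\right) \;=\; \nabla_{\theta}\mathcal{L} + O(\beta),
\]
\[
\hat{\nabla}^{\mathrm{sym}}_{\theta} \;=\; \frac{1}{2\beta}\left(\frac{\partial \Phi}{\partial \theta}\bigl(\theta, s^{+\beta}_{*}\bigr) - \frac{\partial \Phi}{\partial \theta}\bigl(\theta, s^{-\beta}_{*}\bigr)\right) \;=\; \nabla_{\theta}\mathcal{L} + O(\beta^{2}).
\]

As with a centered versus a forward finite difference, the symmetric estimate reduces the bias from O(β) to O(β²) without requiring a smaller, and therefore noisier, nudging strength.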

Full description

Saved in:
Bibliographic Details
Main Authors: Laborieux, Axel, Ernoult, Maxence, Scellier, Benjamin, Bengio, Yoshua, Grollier, Julie, Querlioz, Damien
Format: Article
Language: English
Subjects: Computer Science - Learning; Computer Science - Neural and Evolutionary Computing
Online Access: Order full text
creator Laborieux, Axel; Ernoult, Maxence; Scellier, Benjamin; Bengio, Yoshua; Grollier, Julie; Querlioz, Damien
description Equilibrium Propagation (EP) is a biologically-inspired counterpart of Backpropagation Through Time (BPTT) which, owing to its strong theoretical guarantees and the locality in space of its learning rule, fosters the design of energy-efficient hardware dedicated to learning. In practice, however, EP does not scale to visual tasks harder than MNIST. In this work, we show that a bias in the gradient estimate of EP, inherent in the use of finite nudging, is responsible for this phenomenon and that cancelling it allows training deep ConvNets by EP, including architectures with distinct forward and backward connections. These results highlight EP as a scalable approach to compute error gradients in deep neural networks, thereby motivating its hardware implementation.
format Article
date 2021-01-14
rights http://creativecommons.org/licenses/by/4.0
identifier DOI: 10.48550/arxiv.2101.05536
language eng
source arXiv.org
subjects Computer Science - Learning
Computer Science - Neural and Evolutionary Computing
title Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing its Gradient Estimator Bias
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-06T23%3A47%3A15IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Scaling%20Equilibrium%20Propagation%20to%20Deep%20ConvNets%20by%20Drastically%20Reducing%20its%20Gradient%20Estimator%20Bias&rft.au=Laborieux,%20Axel&rft.date=2021-01-14&rft_id=info:doi/10.48550/arxiv.2101.05536&rft_dat=%3Carxiv_GOX%3E2101_05536%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true