Fixed-Weight Difference Target Propagation

Target Propagation (TP) is a biologically more plausible algorithm than error backpropagation (BP) for training deep networks, and improving the practicality of TP is an open issue. TP methods require the feedforward and feedback networks to form layer-wise autoencoders for propagating the target values...

Detailed Description

Bibliographic Details
Main authors: Shibuya, Tatsukichi; Inoue, Nakamasa; Kawakami, Rei; Sato, Ikuro
Format: Article
Language: English
Subjects:
Online access: Order full text
creator Shibuya, Tatsukichi; Inoue, Nakamasa; Kawakami, Rei; Sato, Ikuro
description Target Propagation (TP) is a biologically more plausible algorithm than error backpropagation (BP) for training deep networks, and improving the practicality of TP is an open issue. TP methods require the feedforward and feedback networks to form layer-wise autoencoders for propagating the target values generated at the output layer. However, this causes certain drawbacks; e.g., careful hyperparameter tuning is required to synchronize the feedforward and feedback training, and more frequent updates of the feedback path are usually required than of the feedforward path. Learning of the feedforward and feedback networks is sufficient to make TP methods capable of training, but is having these layer-wise autoencoders a necessary condition for TP to work? We answer this question by presenting Fixed-Weight Difference Target Propagation (FW-DTP), which keeps the feedback weights constant during training. We confirm that this simple method, which naturally resolves the above-mentioned problems of TP, can still deliver informative target values to hidden layers for a given task; indeed, FW-DTP consistently achieves higher test performance than its baseline, Difference Target Propagation (DTP), on four classification datasets. We also present a novel propagation architecture that explains the exact form of the feedback function of DTP to analyze FW-DTP.
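The difference-target rule the abstract refers to can be sketched in a few lines. The following is a minimal NumPy illustration, not code from the paper: the layer sizes, the tanh nonlinearity, the step size `beta`, and the names `W`, `V`, `g` are all illustrative assumptions. The one point it is meant to show is the FW-DTP idea that the feedback matrices `V` are drawn once and never updated, while targets still flow backward via the difference correction `h_l - g_l(h_{l+1})`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny MLP; sizes are illustrative, not values from the paper.
sizes = [8, 16, 16, 4]
# Feedforward weights (these would be trained from local layer losses).
W = [rng.normal(0.0, 0.5, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
# Feedback weights: in FW-DTP these stay FIXED for all of training.
V = [rng.normal(0.0, 0.5, (n, m)) for n, m in zip(sizes[:-1], sizes[1:])]

def forward(x):
    hs = [x]
    for Wl in W:
        hs.append(np.tanh(Wl @ hs[-1]))
    return hs

def g(l, h):
    # Feedback function: maps activity at layer l+1 back to layer l.
    return np.tanh(V[l] @ h)

x = rng.normal(size=sizes[0])
y = np.eye(sizes[-1])[1]          # one-hot label (illustrative)
hs = forward(x)

# Output target: a small step down the output loss gradient.
beta = 0.1
targets = {len(W): hs[-1] - beta * (hs[-1] - y)}

# Difference target propagation: the correction term hs[l] - g(l, hs[l+1])
# cancels the reconstruction error of the (here fixed-weight) feedback path.
for l in reversed(range(1, len(W))):
    targets[l] = g(l, targets[l + 1]) + hs[l] - g(l, hs[l + 1])

# Each layer would then minimize its local loss ||h_l - t_l||^2;
# the V matrices above are never touched by these updates.
local_losses = {l: float(np.sum((hs[l] - targets[l]) ** 2)) for l in targets}
```

Note the contrast with DTP proper, where `V` is trained (usually more often than `W`) so that each `g` approximately inverts its feedforward layer; FW-DTP simply skips that inner training loop.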
doi_str_mv 10.48550/arxiv.2212.10352
format Article
creationdate 2022-12-19
rights http://creativecommons.org/licenses/by-nc-nd/4.0
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2212.10352
language eng
recordid cdi_arxiv_primary_2212_10352
source arXiv.org
subjects Computer Science - Learning
Computer Science - Neural and Evolutionary Computing
title Fixed-Weight Difference Target Propagation