Backprop Evolution

The back-propagation algorithm is the cornerstone of deep learning. Despite its importance, few variations of the algorithm have been attempted. This work presents an approach to discover new variations of the back-propagation equation. We use a domain-specific language to describe update equations as a list of primitive functions. An evolution-based method is used to discover new propagation rules that maximize the generalization performance after a few epochs of training. We find several update equations that can train faster than standard back-propagation when training time is short, and perform similarly to standard back-propagation at convergence.
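The search described in the abstract can be illustrated with a toy sketch: candidate propagation rules are lists of primitive functions, and a simple evolutionary loop mutates and selects them by a fitness score. All names here (`PRIMITIVES`, `mutate`, `fitness`) are illustrative assumptions, not the paper's actual DSL, which operates on gradient tensors with a much richer primitive set and scores candidates by training real networks.

```python
# Toy sketch of evolution over update rules built from primitives.
# Assumption: a "rule" is a list of unary primitives applied in sequence
# to a scalar gradient signal; fitness is a stand-in for "generalization
# after a few epochs of training".
import random

PRIMITIVES = {
    "identity": lambda g: g,
    "scale_half": lambda g: 0.5 * g,
    "clip": lambda g: max(-1.0, min(1.0, g)),
    "cube": lambda g: g ** 3,
}

def apply_rule(rule, g):
    """Apply each primitive in the rule to the signal g, in order."""
    for name in rule:
        g = PRIMITIVES[name](g)
    return g

def mutate(rule):
    """Replace one randomly chosen primitive with another."""
    child = list(rule)
    child[random.randrange(len(child))] = random.choice(list(PRIMITIVES))
    return child

def fitness(rule):
    """Illustrative score: how closely the rule maps a probe gradient
    of 2.0 onto a target value of 1.0 (higher is better)."""
    return -abs(apply_rule(rule, 2.0) - 1.0)

def evolve(generations=50, population_size=8, rule_len=3, seed=0):
    """Keep the fitter half of the population, refill by mutation."""
    random.seed(seed)
    population = [
        [random.choice(list(PRIMITIVES)) for _ in range(rule_len)]
        for _ in range(population_size)
    ]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(population_size - len(survivors))
        ]
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

Because survivors always include the current best rule, the best fitness never decreases across generations; the paper's method follows the same keep-the-fittest pattern, but evaluates candidates by short training runs of actual networks.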

Detailed Description

Saved in:
Bibliographic Details
Main authors: Alber, Maximilian; Bello, Irwan; Zoph, Barret; Kindermans, Pieter-Jan; Ramachandran, Prajit; Le, Quoc
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Alber, Maximilian
Bello, Irwan
Zoph, Barret
Kindermans, Pieter-Jan
Ramachandran, Prajit
Le, Quoc
description The back-propagation algorithm is the cornerstone of deep learning. Despite its importance, few variations of the algorithm have been attempted. This work presents an approach to discover new variations of the back-propagation equation. We use a domain-specific language to describe update equations as a list of primitive functions. An evolution-based method is used to discover new propagation rules that maximize the generalization performance after a few epochs of training. We find several update equations that can train faster than standard back-propagation when training time is short, and perform similarly to standard back-propagation at convergence.
doi 10.48550/arxiv.1808.02822
format Article
identifier DOI: 10.48550/arxiv.1808.02822
language eng
recordid cdi_arxiv_primary_1808_02822
source arXiv.org
subjects Computer Science - Learning
Computer Science - Neural and Evolutionary Computing
Statistics - Machine Learning
title Backprop Evolution