Back propagation with expected source values

The back propagation learning rule converges significantly faster if expected values of source units are used for updating weights. The expected value of a unit can be approximated as the sum of the output of the unit and its error term. Results from numerous simulations demonstrate the comparative advantage of the new rule.
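The modification described in the abstract can be sketched in code. The following is an illustrative reconstruction, not the paper's own implementation: it trains a small network with plain backprop, and when `expected_source=True` it replaces each hidden source unit's output with its "expected value" (output plus error term) in the weight update. The network size, XOR task, learning rate, and sigmoid units are assumptions made for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_xor(expected_source=False, epochs=2000, lr=0.5, seed=0):
    """Train a 2-2-1 sigmoid network on XOR with backprop.

    When expected_source is True, weight updates into the output layer
    use the hidden unit's expected value (its output plus its error
    term) in place of its output, per the rule described above.
    Input units carry no error term, so the input-layer update is
    unchanged.
    """
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = rng.normal(0, 1, (2, 2)); b1 = np.zeros(2)
    W2 = rng.normal(0, 1, (2, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)        # hidden activations
        Y = sigmoid(H @ W2 + b2)        # network output
        dY = (T - Y) * Y * (1 - Y)      # output-layer error terms
        dH = (dY @ W2.T) * H * (1 - H)  # hidden-layer error terms
        # Source values used in the output-layer update: plain outputs,
        # or outputs plus error terms ("expected values").
        H_src = H + dH if expected_source else H
        W2 += lr * H_src.T @ dY; b2 += lr * dY.sum(axis=0)
        W1 += lr * X.T @ dH;     b1 += lr * dH.sum(axis=0)
    return float(np.mean((T - Y) ** 2))
```

With sigmoid targets in [0, 1], the returned mean squared error always lies in [0, 1]; running both variants side by side lets one compare convergence speed as the paper's simulations do.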

Detailed description

Saved in:
Bibliographic details
Published in: Neural networks, 1991, Vol.4 (5), p.615-618
Main author: Samad, Tariq
Format: Article
Language: English
Subjects:
Online access: Full text
DOI: 10.1016/0893-6080(91)90015-W
ISSN: 0893-6080
EISSN: 1879-2782
Source: Elsevier ScienceDirect Journals
Subjects: Applied sciences
Artificial intelligence
Back propagation
Computer science; control theory; systems
Connectionism. Neural networks
Exact sciences and technology
Neural networks
Supervised learning