The research on the relation of self-learning ratio and the convergence speed in BP networks
The relation between the self-learning ratio and the convergence speed in BP networks is studied in this paper. In theory, only as the self-learning ratio μ → 0 is true gradient descent obtained, and the computation converges to a certain local minimum. However, too small a μ slows convergence, while too large a μ may cause divergence. On the basis of mathematical analysis and computer simulations, the relation is given as n = ln(ε/|W(0) − W*|)/ln(1 − μa), where n is the number of iterations, μ is the self-learning ratio, W(0) is the initial weight, W* is the optimal weight, ε is the required precision, and a is the slope of the straight line fitted to the gradient. A method for determining a better self-learning ratio is also proposed.
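Reading the formula as n = ln(ε/|W(0) − W*|)/ln(1 − μa) (the garbled entities leave the grouping slightly ambiguous; this reading follows from the usual linear-convergence bound |W(n) − W*| ≈ |W(0) − W*|(1 − μa)^n), it can be evaluated with a minimal sketch. The function name and example values below are illustrative, not from the paper:

```python
import math

def iterations_needed(eps, w0_dist, mu, a):
    """Estimate the iteration count n from the relation
    n = ln(eps / |W(0) - W*|) / ln(1 - mu * a),
    assuming linear convergence with contraction factor (1 - mu*a).
    Valid for 0 < mu*a < 1 and 0 < eps < |W(0) - W*|."""
    return math.log(eps / w0_dist) / math.log(1.0 - mu * a)

# Example (hypothetical values): eps = 0.01, |W(0) - W*| = 1.0, mu = 0.1, a = 1.0
n = iterations_needed(0.01, 1.0, 0.1, 1.0)  # roughly 43.7 iterations
```

Consistent with the abstract, shrinking μ makes ln(1 − μa) approach 0 from below, so n grows without bound, while μa approaching 1 (and beyond) signals divergence of the iteration.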
Saved in:
Main authors: | Weining Wen; Sixing Liu; Zhaoying Zhou |
---|---|
Format: | Conference Proceeding |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 134 vol.1 |
---|---|
container_issue | |
container_start_page | 131 |
container_title | |
container_volume | |
creator | Weining Wen; Sixing Liu; Zhaoying Zhou |
description | The relation between the self-learning ratio and the convergence speed in BP networks is studied in this paper. In theory, only as the self-learning ratio μ → 0 is true gradient descent obtained, and the computation converges to a certain local minimum. However, too small a μ slows convergence, while too large a μ may cause divergence. On the basis of mathematical analysis and computer simulations, the relation is given as n = ln(ε/|W(0) − W*|)/ln(1 − μa), where n is the number of iterations, μ is the self-learning ratio, W(0) is the initial weight, W* is the optimal weight, ε is the required precision, and a is the slope of the straight line fitted to the gradient. A method for determining a better self-learning ratio is also proposed. |
doi_str_mv | 10.1109/IMTC.1994.352107 |
format | Conference Proceeding |
fulltext | fulltext_linktorsrc |
identifier | ISBN: 9780780318809; ISBN: 0780318803 |
ispartof | Conference Proceedings. 10th Anniversary. IMTC/94. Advanced Technologies in I & M. 1994 IEEE Instrumentation and Measurement Technology Conference (Cat. No.94CH3424-9), 1994, p.131-134 vol.1 |
issn | |
language | eng |
recordid | cdi_ieee_primary_352107 |
source | IEEE Electronic Library (IEL) Conference Proceedings |
subjects | Artificial neural networks; Computer networks; Computer simulation; Convergence; Feedforward neural networks; Intelligent networks; Mathematical analysis; Motion control; Neural networks; Neurons |
title | The research on the relation of self-learning ratio and the convergence speed in BP networks |