On the Kalman filtering method in neural network training and pruning

When the extended Kalman filter (EKF) approach is used to train and prune a feedforward neural network, two problems usually arise: how to set the initial conditions and how to use the result obtained to prune the network. In this paper, some guidelines for setting the initial conditions are presented and illustrated with a simple example. Then, based on three assumptions: 1) the training set is large enough; 2) the training converges; and 3) the trained network model is close to the actual one, an elegant equation linking the error sensitivity measure (the saliency) to the result obtained via the extended Kalman filter is derived. The validity of this equation is then verified with a simulated example.

Full description

Bibliographic details
Published in: IEEE transactions on neural networks 1999-01, Vol.10 (1), p.161-166
Main authors: Sum, J., Chi-Sing Leung, Young, G.H., Wing-Kay Kan
Format: Article
Language: eng
Online access: Order full text
container_end_page 166
container_issue 1
container_start_page 161
container_title IEEE transactions on neural networks
container_volume 10
creator Sum, J.
Chi-Sing Leung
Young, G.H.
Wing-Kay Kan
description When the extended Kalman filter (EKF) approach is used to train and prune a feedforward neural network, two problems usually arise: how to set the initial conditions and how to use the result obtained to prune the network. In this paper, some guidelines for setting the initial conditions are presented and illustrated with a simple example. Then, based on three assumptions: 1) the training set is large enough; 2) the training converges; and 3) the trained network model is close to the actual one, an elegant equation linking the error sensitivity measure (the saliency) to the result obtained via the extended Kalman filter is derived. The validity of this equation is then verified with a simulated example.
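The abstract sketches two technical ingredients: an EKF weight update whose behaviour depends on the initial covariance, and a saliency measure read off from the filter's result. The paper's own equations are not reproduced in this record, so the sketch below only illustrates the generic EKF training recipe on a toy 1-2-1 network in NumPy; the network shape, the initial covariance P0 = 100 I, the noise variance R = 0.01, and the OBD-style saliency proxy s_i = w_i^2 / (2 P_ii) are illustrative assumptions, not the authors' formulas.

```python
import numpy as np

# Toy 1-2-1 feedforward net: y = v . tanh(W x + b), all weights flattened into w.
N_IN, N_HID = 1, 2
N_W = N_HID * N_IN + N_HID + N_HID  # 6 weights in total

def unpack(w):
    W = w[:N_HID * N_IN].reshape(N_HID, N_IN)
    b = w[N_HID * N_IN:N_HID * N_IN + N_HID]
    v = w[-N_HID:]
    return W, b, v

def predict(w, x):
    W, b, v = unpack(w)
    return float(v @ np.tanh(W @ x + b))

def jacobian(w, x, eps=1e-6):
    # Numerical gradient of the scalar output w.r.t. the weights:
    # the 1 x n measurement Jacobian H_k of the EKF.
    g = np.empty_like(w)
    for i in range(w.size):
        wp, wm = w.copy(), w.copy()
        wp[i] += eps
        wm[i] -= eps
        g[i] = (predict(wp, x) - predict(wm, x)) / (2.0 * eps)
    return g

rng = np.random.default_rng(0)
w = 0.1 * rng.standard_normal(N_W)  # initial weight estimate
P = 100.0 * np.eye(N_W)             # large P0: a vague prior on the weights (assumed value)
R = 0.01                            # assumed measurement-noise variance

# Synthetic training data from a target the net can approximate.
X = rng.uniform(-1.0, 1.0, size=(200, N_IN))
Y = np.tanh(2.0 * X[:, 0]) + 0.05 * rng.standard_normal(200)

for _ in range(5):                  # a few passes over the data
    for x, y in zip(X, Y):
        H = jacobian(w, x)                   # measurement Jacobian, shape (n,)
        S = float(H @ P @ H) + R             # innovation variance (scalar)
        K = (P @ H) / S                      # Kalman gain, shape (n,)
        w = w + K * (y - predict(w, x))      # weight update
        P = P - np.outer(K, H @ P)           # covariance update: (I - K H) P

# Illustrative pruning step: an OBD-style saliency proxy computed from the
# converged filter covariance, ASSUMING P^-1 tracks the error Hessian.
# This stands in for the paper's equation, which is not reproduced here.
saliency = w**2 / (2.0 * np.diag(P))
print("trained weights:", np.round(w, 3))
print("pruning order (least salient first):", np.argsort(saliency))
```

A large P0 makes the first updates take big, gradient-like steps, while a small P0 anchors the weights near their initialization; this is the kind of trade-off the paper's guidance on initial conditions addresses. Under the abstract's three assumptions (large training set, converged training, model close to the truth), the converged P carries second-order information about the error surface, which is why a saliency-like quantity can plausibly be extracted from it.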
doi_str_mv 10.1109/72.737502
format Article
fulltext fulltext_linktorsrc
identifier ISSN: 1045-9227
ispartof IEEE transactions on neural networks, 1999-01, Vol.10 (1), p.161-166
issn 1045-9227
1941-0093
language eng
recordid cdi_pubmed_primary_18252512
source IEEE Electronic Library (IEL)
subjects Applied sciences
Biological neural networks
Computer science
Computer simulation
Covariance matrix
Electric, optical and optoelectronic circuits
Electronics
Equations
Exact sciences and technology
Extended Kalman filter
Feedforward neural networks
Filtering
Initial conditions
Kalman filters
Mathematical analysis
Mathematical models
Multilayer perceptrons
Neural networks
Pruning
Testing
Training
title On the Kalman filtering method in neural network training and pruning
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-01T21%3A14%3A05IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=On%20the%20Kalman%20filtering%20method%20in%20neural%20network%20training%20and%20pruning&rft.jtitle=IEEE%20transactions%20on%20neural%20networks&rft.au=Sum,%20J.&rft.date=1999-01&rft.volume=10&rft.issue=1&rft.spage=161&rft.epage=166&rft.pages=161-166&rft.issn=1045-9227&rft.eissn=1941-0093&rft.coden=ITNNEP&rft_id=info:doi/10.1109/72.737502&rft_dat=%3Cproquest_RIE%3E27038429%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=27038429&rft_id=info:pmid/18252512&rft_ieee_id=737502&rfr_iscdi=true