Stability Analysis and the Stabilization of a Class of Discrete-Time Dynamic Neural Networks


container_end_page 673
container_issue 3
container_start_page 660
container_title IEEE Transactions on Neural Networks
container_volume 18
creator Patan, K.
description This paper deals with problems of stability and the stabilization of discrete-time neural networks. The neural structures under consideration belong to the class of so-called locally recurrent globally feedforward networks. The single processing unit possesses dynamic behavior, realized by introducing into the neuron structure a linear dynamic system in the form of an infinite impulse response filter. In this way, a dynamic neural network is obtained. It is well known that the crucial problem with neural networks of the dynamic type is stability, as well as stabilization in learning problems. The paper formulates stability conditions for the analyzed class of neural networks. Moreover, a stabilization problem is defined and solved as a constrained optimization task. In order to tackle this problem, two methods are proposed: the first is based on a gradient projection (GP) and the second on a minimum distance projection (MDP). It is worth noting that these methods can easily be introduced into the existing learning algorithm as an additional step, and suitable convergence conditions can be developed for them. The efficiency and usefulness of the proposed approaches are demonstrated by a number of experiments, including numerical complexity analysis, stabilization effectiveness, and the identification of an industrial process.
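To make the neuron model in the description above more concrete, the sketch below is an assumption-laden illustration, not the paper's algorithm: a single dynamic neuron whose internal state is an infinite impulse response filter, plus a simple rescaling of the filter's feedback coefficients into a conservative stability region. That rescaling only stands in for the gradient projection (GP) and minimum distance projection (MDP) steps mentioned in the abstract; the filter order, activation, parameter names, and the sum(|a_j|) < 1 constraint are choices made for this example.

```python
import numpy as np


class DynamicNeuron:
    """Locally recurrent unit: weighted inputs -> IIR filter -> static activation.

    Illustrative sketch only; the paper's neuron model and its exact stability
    conditions are only approximated here.
    """

    def __init__(self, n_inputs, order=2, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        self.w = rng.normal(scale=0.1, size=n_inputs)    # synaptic input weights
        self.b = rng.normal(scale=0.1, size=order + 1)   # feedforward filter coefficients
        self.a = rng.normal(scale=0.1, size=order)       # feedback coefficients (set the poles)
        self.x_hist = np.zeros(order + 1)                # past filter inputs x[k], x[k-1], ...
        self.y_hist = np.zeros(order)                    # past filter outputs y[k-1], y[k-2], ...

    def step(self, u):
        """One discrete-time step: y[k] = sum_i b_i x[k-i] - sum_j a_j y[k-j], then tanh."""
        x = self.w @ u
        self.x_hist = np.roll(self.x_hist, 1)
        self.x_hist[0] = x
        y = self.b @ self.x_hist - self.a @ self.y_hist
        self.y_hist = np.roll(self.y_hist, 1)
        self.y_hist[0] = y
        return np.tanh(y)

    def project_feedback(self, bound=0.95):
        """Crude stabilization step: rescale the feedback coefficients so that
        sum(|a_j|) <= bound < 1, a sufficient (not necessary) condition for all
        filter poles to lie inside the unit circle. The paper's GP/MDP methods
        instead project onto the exact stability region after each learning update."""
        s = np.sum(np.abs(self.a))
        if s > bound:
            self.a *= bound / s


# Hypothetical usage: run the neuron on random inputs, then re-stabilize after a
# (not shown) learning update that may have pushed the poles outside the unit circle.
neuron = DynamicNeuron(n_inputs=3, order=2, rng=np.random.default_rng(0))
for u in np.random.default_rng(1).normal(size=(20, 3)):
    out = neuron.step(u)
neuron.a += 0.5                     # stand-in for a destabilizing gradient step
neuron.project_feedback()
assert np.sum(np.abs(neuron.a)) <= 0.95 + 1e-12
```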
doi_str_mv 10.1109/TNN.2007.891199
format Article
fulltext fulltext_linktorsrc
identifier ISSN: 1045-9227
ispartof IEEE Transactions on Neural Networks, 2007-05, Vol.18 (3), p.660-673
issn 1045-9227
2162-237X
1941-0093
2162-2388
language eng
recordid cdi_crossref_primary_10_1109_TNN_2007_891199
source IEEE Electronic Library (IEL)
subjects Algorithms
Applied sciences
Artificial Intelligence
Computer science; control theory; systems
Computer Simulation
Computer systems and distributed systems. User interface
Constrained optimization
Constraint optimization
Convergence
Delay lines
dynamic neural network
Dynamical systems
Dynamics
Exact sciences and technology
IIR filters
Information Storage and Retrieval - methods
Learning
Mathematical models
Models, Theoretical
Multi-layer neural network
Multilayer perceptrons
Neural networks
Neural Networks (Computer)
Neurons
Projection
Signal Processing, Computer-Assisted
Software
Stability
Stability analysis
Stabilization
stochastic approximation
Studies
System identification
title Stability Analysis and the Stabilization of a Class of Discrete-Time Dynamic Neural Networks
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-06T09%3A39%3A36IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Stability%20Analysis%20and%20the%20Stabilization%20of%20a%20Class%20of%20Discrete-Time%20Dynamic%20Neural%20Networks&rft.jtitle=IEEE%20transaction%20on%20neural%20networks%20and%20learning%20systems&rft.au=Patan,%20K.&rft.date=2007-05-01&rft.volume=18&rft.issue=3&rft.spage=660&rft.epage=673&rft.pages=660-673&rft.issn=1045-9227&rft.eissn=1941-0093&rft.coden=ITNNEP&rft_id=info:doi/10.1109/TNN.2007.891199&rft_dat=%3Cproquest_RIE%3E70550541%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=912214727&rft_id=info:pmid/17526334&rft_ieee_id=4182402&rfr_iscdi=true