Two-Stage Orthogonal Least Squares Methods for Neural Network Construction
A number of neural networks can be formulated as linear-in-the-parameters models. Training such networks can then be cast as a model selection problem in which a compact model is selected from all the candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches.
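The abstract describes building linear-in-the-parameters networks by subset selection, with a forward stage ranked by the error reduction ratio (ERR) and a backward refinement stage. As background only, the following is a minimal sketch of the classical forward orthogonal least squares (OLS) stage that such two-stage methods start from; it is not the authors' algorithm, and the function name `forward_ols` and the toy data are illustrative assumptions.

```python
# Minimal sketch (assumes NumPy) of classical forward OLS selection with the
# error reduction ratio (ERR) criterion; NOT the paper's two-stage algorithm.
import numpy as np

def forward_ols(P, y, n_terms):
    """Greedily pick n_terms columns of the candidate regressor matrix
    P (N x M) for the target y (N,), ranking candidates by their ERR."""
    P = np.asarray(P, dtype=float)
    y = np.asarray(y, dtype=float)
    N, M = P.shape
    selected = []                     # indices of chosen regressors
    Q = np.empty((N, 0))              # orthogonalized versions of chosen regressors
    yy = float(y @ y)
    for _ in range(min(n_terms, M)):
        best_idx, best_err, best_w = -1, -np.inf, None
        for j in range(M):
            if j in selected:
                continue
            w = P[:, j].copy()
            # Gram-Schmidt: orthogonalize the candidate against chosen terms
            for k in range(Q.shape[1]):
                w -= (Q[:, k] @ P[:, j]) / (Q[:, k] @ Q[:, k]) * Q[:, k]
            ww = float(w @ w)
            if ww < 1e-12:            # (numerically) dependent on chosen terms
                continue
            g = float(w @ y) / ww
            err = g * g * ww / yy     # error reduction ratio of this candidate
            if err > best_err:
                best_idx, best_err, best_w = j, err, w
        if best_idx < 0:
            break
        selected.append(best_idx)
        Q = np.column_stack([Q, best_w])
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P = rng.standard_normal((200, 20))                          # 20 candidates
    y = 2.0 * P[:, 3] - 1.5 * P[:, 7] + 0.01 * rng.standard_normal(200)
    print(forward_ols(P, y, n_terms=2))                         # typically [3, 7]
```

A companion sketch of the backward term-exchange idea mentioned in the abstract appears after the record fields at the end of this page.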
Saved in:
Published in: | IEEE Transactions on Neural Networks and Learning Systems, 2015-08, Vol.26 (8), p.1608-1621 |
---|---|
Main authors: | Long Zhang, Kang Li, Er-Wei Bai, George W. Irwin |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 1621 |
---|---|
container_issue | 8 |
container_start_page | 1608 |
container_title | IEEE Transactions on Neural Networks and Learning Systems |
container_volume | 26 |
creator | Long Zhang; Kang Li; Er-Wei Bai; Irwin, George W. |
description | A number of neural networks can be formulated as linear-in-the-parameters models. Training such networks can then be cast as a model selection problem in which a compact model is selected from all the candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches. However, they may produce only suboptimal models and can become trapped in a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods in place of the fast recursive-based methods. In contrast to the TSFRA, a new simplified relationship between the forward and backward stages is derived, using the inherent orthogonal properties of the least squares methods to avoid repetitive computations. Furthermore, a new term-exchanging scheme for backward model refinement is introduced to reduce the computational demand. Finally, given the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples demonstrate the improved model compactness achieved by the proposed technique in comparison with several popular methods. |
doi_str_mv | 10.1109/TNNLS.2014.2346399 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2162-237X; EISSN: 2162-2388; DOI: 10.1109/TNNLS.2014.2346399; PMID: 25222956 |
ispartof | IEEE Transactions on Neural Networks and Learning Systems, 2015-08, Vol.26 (8), p.1608-1621 |
issn | 2162-237X 2162-2388 |
language | eng |
recordid | cdi_crossref_primary_10_1109_TNNLS_2014_2346399 |
source | IEEE Electronic Library (IEL) |
subjects | Algorithms; Artificial Intelligence; Backward model refinement; computational complexity; Computational modeling; Cost function; forward selection; Least squares methods; Least-Squares Analysis; linear-in-the-parameters model; Matching pursuit algorithms; Models, Theoretical; Neural networks; Neural Networks, Computer; Numerical models; orthogonal least square (OLS); Vectors |
title | Two-Stage Orthogonal Least Squares Methods for Neural Network Construction |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-31T14%3A56%3A39IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Two-Stage%20Orthogonal%20Least%20Squares%20Methods%20for%20Neural%20Network%20Construction&rft.jtitle=IEEE%20transaction%20on%20neural%20networks%20and%20learning%20systems&rft.au=Long%20Zhang&rft.date=2015-08-01&rft.volume=26&rft.issue=8&rft.spage=1608&rft.epage=1621&rft.pages=1608-1621&rft.issn=2162-237X&rft.eissn=2162-2388&rft.coden=ITNNAL&rft_id=info:doi/10.1109/TNNLS.2014.2346399&rft_dat=%3Cproquest_RIE%3E1697221046%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1697221046&rft_id=info:pmid/25222956&rft_ieee_id=6895303&rfr_iscdi=true |
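As noted after the abstract above, the description also mentions a backward model-refinement stage based on term exchanging. The sketch below (assumes NumPy) illustrates that idea in its naive form: given an already-selected index set, try exchanging each selected term for an unselected candidate and keep any swap that lowers the residual sum of squares. It refits the model from scratch for every trial swap; avoiding exactly this repetitive computation through orthogonal updates is the paper's contribution and is not reproduced here. The names `residual_sse` and `backward_refine` are illustrative.

```python
# Hedged sketch of backward refinement by greedy term exchange, using a
# brute-force least-squares refit per candidate swap (for illustration only).
import numpy as np

def residual_sse(P, y, idx):
    """Sum of squared residuals of the least-squares fit on columns idx."""
    theta, *_ = np.linalg.lstsq(P[:, idx], y, rcond=None)
    r = y - P[:, idx] @ theta
    return float(r @ r)

def backward_refine(P, y, selected):
    """Greedy term exchange until no single swap improves the fit."""
    P = np.asarray(P, dtype=float)
    y = np.asarray(y, dtype=float)
    selected = list(selected)
    improved = True
    while improved:
        improved = False
        best = residual_sse(P, y, selected)
        for pos in range(len(selected)):
            for new in range(P.shape[1]):
                if new in selected:
                    continue
                trial = selected.copy()
                trial[pos] = new
                if residual_sse(P, y, trial) < best - 1e-12:
                    selected, improved = trial, True
                    break             # restart the scan from the improved model
            if improved:
                break
    return selected
```

Under the same toy setup as the earlier sketch, `backward_refine(P, y, forward_ols(P, y, 2))` would leave the selection unchanged when the forward stage already finds the true terms; its value shows when forward selection stops at a suboptimal subset.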