Dynamic Network Structure: Doubly Stacking Broad Learning Systems With Residuals and Simpler Linear Model Transmission
While the broad learning system (BLS) has demonstrated distinctive performance through its solid theoretical foundation, strong generalization capability, and fast learning speed, a relatively large network structure (i.e., a large number of enhancement nodes) is often required to assure satisfactory performance, especially on challenging datasets, which may inevitably deteriorate its generalization capability due to overfitting. In this study, by stacking several broad learning sub-systems, a doubly stacked broad learning system with residuals and simpler linear model transmission, called RST&BLS, is presented to improve BLS in network size, generalization capability, and learning speed. Using shared feature nodes and simpler linear models between stacked layers, the design methodology of RST&BLS is motivated by three facets: 1) analogous to human-like neural behavior, in which certain common neuron blocks are always activated to deal with correlated problems, an enhanced ensemble of BLS sub-systems results; 2) humans prefer a simple model to a complicated one (as a component of the final model); 3) extra overfitting-avoidance capability between the shared feature nodes and the remaining hidden nodes from the second layer can be assured in theory. Beyond its performance advantage over the comparative methods, experimental results on twenty-one classification/regression datasets indicate the superiority of RST&BLS in terms of a smaller network structure (i.e., fewer adjustable parameters), better generalization capability, and lower computational burden.
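To make the stacking idea in the abstract concrete, the sketch below implements a generic stacked-residual broad-learning model in Python/NumPy. It illustrates only the general principle (random feature and enhancement nodes, a ridge-regressed readout, and each stacked layer fitted to the residual left by the layers below); the function names, node counts, and regularization constant are hypothetical, and this is not the authors' RST&BLS algorithm, which additionally shares feature nodes and transmits simpler linear models between layers.

```python
# Minimal sketch of a stacked-residual broad-learning model (regression case).
# NOT the authors' RST&BLS: the purely random node weights, node counts, and
# plain ridge solve are simplifying assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def fit_bls_layer(X, Y, n_feat=20, n_enh=50, lam=1e-3):
    """Fit one broad-learning sub-system: random feature nodes Z, random
    enhancement nodes H, and a ridge-regressed linear readout W."""
    Wf = rng.standard_normal((X.shape[1], n_feat))
    Z = np.tanh(X @ Wf)                        # feature nodes
    We = rng.standard_normal((n_feat, n_enh))
    H = np.tanh(Z @ We)                        # enhancement nodes
    A = np.hstack([Z, H])                      # broad expansion [Z | H]
    # Ridge solution: W = (A^T A + lam*I)^{-1} A^T Y
    W = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)
    return Wf, We, W

def predict_bls_layer(params, X):
    Wf, We, W = params
    Z = np.tanh(X @ Wf)
    return np.hstack([Z, np.tanh(Z @ We)]) @ W

def fit_stacked(X, Y, n_layers=3):
    """Stack sub-systems: each layer is trained on the residual left by the
    layers below it, so the layer predictions accumulate additively."""
    layers, residual = [], Y.astype(float)
    for _ in range(n_layers):
        params = fit_bls_layer(X, residual)
        layers.append(params)
        residual = residual - predict_bls_layer(params, X)  # pass residual down
    return layers

def predict_stacked(layers, X):
    return sum(predict_bls_layer(p, X) for p in layers)

# Toy usage: regress y = sin(x); training error shrinks as layers stack.
X = rng.uniform(-3.0, 3.0, size=(200, 1))
Y = np.sin(X)
model = fit_stacked(X, Y)
print("train MSE:", float(np.mean((predict_stacked(model, X) - Y) ** 2)))
```

The residual-passing loop above mirrors only the stacking skeleton; per the abstract, RST&BLS additionally keeps the between-layer models simple and shares feature nodes across layers, which the paper argues curbs overfitting.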
Published in: IEEE Transactions on Emerging Topics in Computational Intelligence, 2022-12, Vol. 6 (6), p. 1378-1395
Main authors: Xie, Runshan; Vong, Chi-Man; Chen, C. L. Philip; Wang, Shitong
Format: Article
Language: English
Subjects: Broad learning system (BLS); co-adaption; computational intelligence; computational modeling; correlation; data models; datasets; generalization; learning algorithms; learning systems; nodes; overfitting; simple linear models; stacked structure; stacking; training data
DOI: 10.1109/TETCI.2022.3146983
ISSN/EISSN: 2471-285X
Publisher: IEEE, Piscataway
Source: IEEE Electronic Library (IEL)
Online access: Order full text