Dynamic Network Structure: Doubly Stacking Broad Learning Systems With Residuals and Simpler Linear Model Transmission
Saved in:
Published in: | IEEE Transactions on Emerging Topics in Computational Intelligence 2022-12, Vol.6 (6), p.1378-1395 |
---|---|
Main authors: | , , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | While the broad learning system (BLS) has demonstrated distinctive performance thanks to its solid theoretical foundation, strong generalization capability, and fast learning speed, a relatively large network structure (i.e., a large number of enhancement nodes) is often required to ensure satisfactory performance, especially on challenging datasets, which may inevitably degrade its generalization capability due to overfitting. In this study, by stacking several broad learning sub-systems, a doubly Stacked broad learning system through Residuals and Simpler linear model Transmission, called RST&BLS, is presented to improve BLS in terms of network size, generalization capability, and learning speed. With shared feature nodes and simpler linear models between stacked layers, the design methodology of RST&BLS is motivated by three facets: 1) analogous to human neural behavior, in which certain common neuron blocks are always activated to handle correlated problems, an enhanced ensemble of BLS sub-systems results; 2) rather than a complicated model, humans prefer a simple model (as a component of the final model); 3) extra overfitting-avoidance capability between the shared feature nodes and the remaining hidden nodes from the second layer onward can be guaranteed in theory. Beyond its performance advantage over the comparative methods, experimental results on twenty-one classification/regression datasets indicate the superiority of RST&BLS in terms of smaller network structure (i.e., fewer adjustable parameters), better generalization capability, and lower computational burden. |
---|---|
ISSN: | 2471-285X |
DOI: | 10.1109/TETCI.2022.3146983 |
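The abstract describes stacking several BLS sub-systems so that later layers correct what earlier layers missed. A minimal illustrative sketch of that general idea follows; it is not the paper's RST&BLS algorithm (it omits shared feature nodes and linear model transmission), and the node counts, tanh activations, and ridge solver are assumptions for the sake of a runnable example:

```python
# Hypothetical sketch: a basic BLS sub-system (random feature nodes,
# enhancement nodes, closed-form ridge readout) stacked so that each
# layer is fit to the residual of the layers before it.
import numpy as np

rng = np.random.default_rng(0)

def bls_layer(X, y, n_feat=20, n_enh=40, reg=1e-2):
    """Fit one BLS sub-system and return its random weights and readout."""
    Wf = rng.standard_normal((X.shape[1], n_feat))
    Z = np.tanh(X @ Wf)                       # feature nodes
    We = rng.standard_normal((n_feat, n_enh))
    H = np.tanh(Z @ We)                       # enhancement nodes
    A = np.hstack([Z, H])                     # broad layer
    # ridge solution: W = (A^T A + reg*I)^-1 A^T y
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ y)
    return (Wf, We, W)

def bls_predict(model, X):
    Wf, We, W = model
    Z = np.tanh(X @ Wf)
    H = np.tanh(Z @ We)
    return np.hstack([Z, H]) @ W

def stacked_bls_fit(X, y, n_layers=3):
    """Each new sub-system is trained on the current ensemble's residual."""
    models, residual = [], y.copy()
    for _ in range(n_layers):
        m = bls_layer(X, residual)
        models.append(m)
        residual = residual - bls_predict(m, X)
    return models

def stacked_bls_predict(models, X):
    return sum(bls_predict(m, X) for m in models)

# Toy regression check on synthetic data.
X = rng.standard_normal((200, 5))
y = np.sin(X[:, :1]) + 0.1 * X[:, 1:2]
models = stacked_bls_fit(X, y)
pred = stacked_bls_predict(models, X)
mse = float(np.mean((pred - y) ** 2))
```

Because each sub-system's readout is solved in closed form, training stays fast even as layers are added, which mirrors the speed argument the abstract makes for stacking small BLS modules instead of enlarging a single one.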