A new learning paradigm for random vector functional-link network: RVFL

In school, a teacher plays an important role in classroom teaching. Analogous to this human learning activity, the learning using privileged information (LUPI) paradigm provides additional, teacher-generated information to 'teach' learning models during the training stage; it is therefore a typical teacher-student interaction mechanism. This paper is the first to present a random vector functional link (RVFL) network based on the LUPI paradigm, called RVFL+. RVFL+ incorporates the LUPI paradigm into the RVFL so that an additional source of information can be exploited, which offers an alternative way to train the RVFL. Rather than simply combining two existing approaches, the newly derived RVFL+ bridges the gap between classical randomized neural networks and the more recent LUPI paradigm. Moreover, RVFL+ can be used in conjunction with the kernel trick for highly complicated nonlinear feature learning, yielding KRVFL+. The statistical properties of RVFL+ are also investigated, and a sharp, high-quality generalization error bound based on the Rademacher complexity is presented. Competitive experimental results on 14 real-world datasets demonstrate the effectiveness and efficiency of RVFL+ and KRVFL+, which achieve better generalization performance than state-of-the-art methods.

• This paper presents a new RVFL+, an alternative way to train the RVFL.
• The RVFL+ bridges the gap between randomized neural networks and the LUPI paradigm.
• The KRVFL+ is also proposed in order to handle highly nonlinear relationships.
• The paper provides a theoretical guarantee using the Rademacher complexity.
• Performance is evaluated on 14 real-world datasets against state-of-the-art methods.
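For readers unfamiliar with the base model named in the title, the sketch below implements a plain RVFL network in Python/NumPy: a fixed random hidden layer, direct input-to-output links, and output weights obtained in closed form by ridge regression. This illustrates only the standard RVFL, not the paper's RVFL+ or KRVFL+ (which additionally use privileged features and the kernel trick); the class name and hyperparameters are illustrative assumptions, not taken from the article.

```python
import numpy as np

class RVFL:
    """Minimal random vector functional link network (sketch, not the paper's RVFL+)."""

    def __init__(self, n_hidden=100, reg=1e-2, seed=0):
        self.n_hidden = n_hidden   # number of random enhancement nodes
        self.reg = reg             # ridge regularization strength
        self.rng = np.random.default_rng(seed)

    def _features(self, X):
        # Fixed random projection + sigmoid gives the enhancement nodes.
        H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))
        # Direct links: concatenate the raw inputs with the enhancement nodes.
        return np.hstack([X, H])

    def fit(self, X, Y):
        # Hidden weights are sampled once and never trained.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        D = self._features(X)
        # Closed-form ridge regression: beta = (D^T D + reg * I)^{-1} D^T Y
        A = D.T @ D + self.reg * np.eye(D.shape[1])
        self.beta = np.linalg.solve(A, D.T @ Y)
        return self

    def predict(self, X):
        return self._features(X) @ self.beta


# Toy usage: regression on synthetic data (targets are 2-D, shape (n_samples, n_outputs)).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(200, 1))
model = RVFL(n_hidden=50, reg=1e-2).fit(X, y)
predictions = model.predict(X)
```

In the paper's RVFL+ the privileged features, and in KRVFL+ the kernel trick, enter through the training objective rather than this plain ridge solution; see the article itself for the exact formulation and the Rademacher-complexity bound.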

Detailed description

Saved in:
Bibliographic details
Published in: Neural networks, 2020-02, Vol. 122, p. 94-105
Main authors: Zhang, Peng-Bo; Yang, Zhi-Xin
Format: Article
Language: English
Subjects: KRVFL; Learning using privileged information; Random vector functional link networks; RVFL; Supervised Machine Learning; SVM; The Rademacher complexity
Online access: Full text
container_end_page 105
container_issue
container_start_page 94
container_title Neural networks
container_volume 122
creator Zhang, Peng-Bo
Yang, Zhi-Xin
doi_str_mv 10.1016/j.neunet.2019.09.039
format Article
publisher Elsevier Ltd (United States)
pmid 31677442
orcidid 0000-0001-9151-7758
rights Copyright © 2019 Elsevier Ltd. All rights reserved.
fulltext fulltext
identifier ISSN: 0893-6080
ispartof Neural networks, 2020-02, Vol.122, p.94-105
issn 0893-6080
1879-2782
language eng
recordid cdi_proquest_miscellaneous_2311658770
source MEDLINE; Elsevier ScienceDirect Journals Complete
subjects KRVFL
Learning using privileged information
Random vector functional link networks
RVFL
Supervised Machine Learning
SVM
The Rademacher complexity
title A new learning paradigm for random vector functional-link network: RVFL
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-02T06%3A54%3A25IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=A%20new%20learning%20paradigm%20for%20random%20vector%20functional-link%20network:%20RVFL&rft.jtitle=Neural%20networks&rft.au=Zhang,%20Peng-Bo&rft.date=2020-02&rft.volume=122&rft.spage=94&rft.epage=105&rft.pages=94-105&rft.issn=0893-6080&rft.eissn=1879-2782&rft_id=info:doi/10.1016/j.neunet.2019.09.039&rft_dat=%3Cproquest_cross%3E2311658770%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2311658770&rft_id=info:pmid/31677442&rft_els_id=S0893608019303211&rfr_iscdi=true