A Fast Reduced Kernel Extreme Learning Machine

In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to the work on Support Vector Machine (SVM) or Least Square SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively...

Full description

Saved in:
Bibliographic Details
Published in: Neural networks 2016-04, Vol.76, p.29-38
Main Authors: Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua
Format: Article
Language: English
Subjects:
Online Access: Full text
Description: In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to the work on Support Vector Machine (SVM) or Least Square SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on Big datasets. RKELM is established based on the rigorous proof of universal learning involving reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear functions accurately under the condition of support vectors sufficiency. Experimental results on a wide variety of real world small instance size and large instance size applications in the context of binary classification, multi-class problem and regression are then reported to show that RKELM can perform at competitive level of generalized performance as the SVM/LS-SVM at only a fraction of the computational effort incurred.
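The mechanism the abstract describes, randomly selecting a subset of training samples as kernel centers ("support vectors") and then solving a single closed-form least-squares problem for the output weights, can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the function names, the choice of an RBF kernel, and all parameter values (`n_centers`, `reg`, `gamma`) are assumptions.

```python
import numpy as np

def rbf_kernel(X, C, gamma=1.0):
    # Pairwise RBF kernel between data X (n x d) and centers C (m x d).
    sq = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def rkelm_train(X, Y, n_centers=50, reg=1e-3, gamma=1.0, rng=None):
    # Randomly pick a subset of training samples as kernel centers,
    # then solve a regularized least-squares problem for the output
    # weights beta -- no iterative optimization as in SVM training.
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(X), size=min(n_centers, len(X)), replace=False)
    C = X[idx]
    K = rbf_kernel(X, C, gamma)  # n x m reduced kernel matrix
    beta = np.linalg.solve(K.T @ K + reg * np.eye(len(C)), K.T @ Y)
    return C, beta

def rkelm_predict(X, C, beta, gamma=1.0):
    # Prediction is a kernel expansion over the stored centers.
    return rbf_kernel(X, C, gamma) @ beta
```

Because the only training cost is building the n x m reduced kernel matrix and one m x m linear solve (with m fixed and typically much smaller than n), this avoids the iterative support-vector identification that dominates SVM/LS-SVM training time.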
DOI: 10.1016/j.neunet.2015.10.006
Publisher: Elsevier Ltd (United States)
PMID: 26829605
Rights: Copyright © 2015 Elsevier Ltd. All rights reserved.
ISSN: 0893-6080
EISSN: 1879-2782
Source: MEDLINE; Access via ScienceDirect (Elsevier)
subjects Algorithms
Cost engineering
Extreme learning machine
Kernel method
Kernels
Machine Learning
Mathematical analysis
Neural networks
RBF network
Regression
Support Vector Machine
Support vector machines
Vectors (mathematics)
title A Fast Reduced Kernel Extreme Learning Machine