Deep Network Based on Stacked Orthogonal Convex Incremental ELM Autoencoders

The extreme learning machine (ELM) has recently attracted considerable research interest because of its fast training speed and strong generalization ability. The incremental extreme learning machine (I-ELM), which adds hidden nodes one at a time, outperforms many popular learning algorithms; however, incremental ELM algorithms do not recalculate the output weights of the existing nodes when a new node is added and therefore cannot reach the least-squares solution for the output weight vector. This paper proposes the orthogonal convex incremental extreme learning machine (OCI-ELM), which combines the Gram-Schmidt orthogonalization method with Barron's convex optimization learning method to address both the nonconvex optimization problem and the least-squares solution problem, and provides rigorous proofs of the resulting properties. Building on the stacked-generalization philosophy, the paper further proposes a deep architecture of stacked OCI-ELM autoencoders for large and complex data problems. Experiments on UCI datasets and on larger datasets show that the resulting deep network of stacked OCI-ELM autoencoders (DOC-IELM-AEs) outperforms the other methods considered in the paper on both regression and classification problems.
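
To make the idea in the abstract concrete, here is a minimal NumPy sketch, not the authors' implementation: it adds random hidden nodes incrementally, orthogonalizes each new hidden-output vector against the existing ones with Gram-Schmidt, and re-solves the output weights by least squares after every addition (the recomputation that plain I-ELM skips). Barron's convex-optimization step and the stacked-autoencoder architecture are omitted, and all function and parameter names are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def incremental_elm_orthogonal(X, T, max_nodes=50, tol=1e-6, seed=None):
    """Toy incremental ELM: add random hidden nodes one at a time,
    orthogonalize each new hidden-output vector against the previous
    ones (Gram-Schmidt), and re-solve the output weights by least
    squares after each addition. Illustrative only; not the authors'
    OCI-ELM implementation."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    H_cols = []                      # orthonormalized hidden-output vectors
    beta = None
    for _ in range(max_nodes):
        w = rng.standard_normal(d)   # random input weights of the new node
        b = rng.standard_normal()    # random bias of the new node
        h = sigmoid(X @ w + b)       # raw output vector of the new node
        # Gram-Schmidt: remove components along existing columns
        for q in H_cols:
            h = h - (q @ h) * q
        norm = np.linalg.norm(h)
        if norm < tol:               # new node adds nothing new; skip it
            continue
        H_cols.append(h / norm)
        H = np.column_stack(H_cols)
        # least-squares output weights for ALL nodes so far
        # (plain I-ELM avoids this recomputation)
        beta, *_ = np.linalg.lstsq(H, T, rcond=None)
        if np.linalg.norm(T - H @ beta) < tol:
            break
    return np.column_stack(H_cols), beta

# Usage on a 1-D toy regression problem
X = np.linspace(-1, 1, 200).reshape(-1, 1)
T = np.sin(3 * X).ravel()
H, beta = incremental_elm_orthogonal(X, T, max_nodes=50, seed=0)
print("training RMSE:", np.sqrt(np.mean((H @ beta - T) ** 2)))
```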

Bibliographic Details
Published in: Mathematical Problems in Engineering 2016-01, Vol. 2016 (2016), p. 883-899
Main Authors: Wang, Chao; Gu, Shusheng; Wang, Jianhui
Format: Article
Language: English
Subjects:
Online Access: Full text
doi 10.1155/2016/1649486
format Article
fulltext fulltext
identifier ISSN: 1024-123X
ispartof Mathematical Problems in Engineering, 2016-01, Vol.2016 (2016), p.883-899
issn 1024-123X
1563-5147
language eng
recordid cdi_proquest_miscellaneous_1835655931
source Wiley-Blackwell Open Access Titles; EZB-FREE-00999 freely available EZB journals; Alma/SFX Local Collection
subjects Accuracy
Algorithms
Architectural engineering
Artificial neural networks
Convexity
Datasets
Engineering
Learning
Least squares
Least squares method
Machine learning
Mathematical analysis
Networks
Neural networks
Optimization
Regression
title Deep Network Based on Stacked Orthogonal Convex Incremental ELM Autoencoders
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-14T00%3A02%3A30IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Deep%20Network%20Based%20on%20Stacked%20Orthogonal%20Convex%20Incremental%20ELM%20Autoencoders&rft.jtitle=Mathematical%20Problems%20in%20Engineering&rft.au=Wang,%20Chao&rft.date=2016-01-01&rft.volume=2016&rft.issue=2016&rft.spage=883&rft.epage=899&rft.pages=883-899&rft.issn=1024-123X&rft.eissn=1563-5147&rft_id=info:doi/10.1155/2016/1649486&rft_dat=%3Cproquest_cross%3E4112870781%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1802695880&rft_id=info:pmid/&rft_airiti_id=P20161117002_201612_201711200011_201711200011_883_899&rfr_iscdi=true