Robust Fisher-regularized extreme learning machine with asymmetric Welsch-induced loss function for classification

In general, building a robust classifier for data sets containing noise or outliers is a challenging problem, and it is harder still for datasets with an asymmetric noise distribution. The Fisher-regularized extreme learning machine (Fisher-ELM) considers the s...

Full description

Saved in:
Bibliographic details
Published in: Applied intelligence (Dordrecht, Netherlands), 2024-07, Vol.54 (13-14), p.7352-7376
Main authors: Xue, Zhenxia, Zhao, Chongning, Wei, Shuqing, Ma, Jun, Lin, Shouhe
Format: Article
Language: eng
Subjects:
Online access: Full text
container_end_page 7376
container_issue 13-14
container_start_page 7352
container_title Applied intelligence (Dordrecht, Netherlands)
container_volume 54
creator Xue, Zhenxia
Zhao, Chongning
Wei, Shuqing
Ma, Jun
Lin, Shouhe
description In general, building a robust classifier for data sets containing noise or outliers is a challenging problem, and it is harder still for datasets with an asymmetric noise distribution. The Fisher-regularized extreme learning machine (Fisher-ELM) considers the statistical knowledge of the data, but it ignores the impact of noise and outliers. In this paper, to reduce the negative influence of noise and outliers, we first put forward a novel asymmetric Welsch loss function, named AW-loss, based on the asymmetric L2-loss function and the Welsch loss function. Based on the AW-loss function, we then present a new robust Fisher-ELM called AWFisher-ELM. The proposed AWFisher-ELM not only takes into account the statistical information of the data but also considers the impact of asymmetrically distributed noise. We use the concave-convex procedure (CCCP) and a dual method to handle the non-convexity of the proposed AWFisher-ELM. We also give an algorithm for AWFisher-ELM and prove a theorem on the convergence of the algorithm. To validate the effectiveness of our algorithm, we compare AWFisher-ELM with other state-of-the-art methods on artificial data sets, UCI data sets, large NDC data sets, and image data sets under different noise ratios. The experimental results are as follows: the accuracy of AWFisher-ELM is the highest on the artificial data sets, reaching 98.9%. For the large-scale NDC data sets and the image data sets, the accuracy of AWFisher-ELM is also the highest. For the ten UCI data sets, the accuracy and F1 score of AWFisher-ELM are the highest on most data sets except for Diabetes. In terms of training time, AWFisher-ELM takes almost the same time as RHELM and CHELM, but longer than OPT-ELM, WCS-SVM, Fisher-SVM, Pinball-FisherSVM, and Fisher-ELM.
This is because AWFisher-ELM, RHELM, and CHELM must solve a convex quadratic subproblem in each iteration. In conclusion, our method exhibits excellent generalization performance, at the cost of longer training time.
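The AW-loss described above combines an asymmetric L2-loss with the Welsch loss. The paper's exact definition is not reproduced in this record, but the general idea can be sketched: the classical Welsch loss is smooth, non-convex, and bounded, so extreme residuals (outliers) have limited influence, and an asymmetric variant applies a different scale on each side of zero. The function names and parameters below (`c_pos`, `c_neg`) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def welsch_loss(r, c=1.0):
    """Classical Welsch loss: smooth, non-convex, and bounded above by
    c**2 / 2, so residuals from outliers have limited influence."""
    return (c**2 / 2.0) * (1.0 - np.exp(-(r / c) ** 2))

def asymmetric_welsch_loss(r, c_pos=1.0, c_neg=2.0):
    """Hypothetical asymmetric variant: a different scale parameter on each
    side of zero, so positive and negative residuals are penalized
    differently (parameter names are illustrative, not the paper's)."""
    c = np.where(r >= 0.0, c_pos, c_neg)
    return (c**2 / 2.0) * (1.0 - np.exp(-(r / c) ** 2))
```

Because such a loss is bounded, a single extreme outlier cannot dominate the objective the way it does under the unbounded L2-loss; the non-convexity this introduces is what motivates solving the model with CCCP, as the abstract describes.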
doi_str_mv 10.1007/s10489-024-05528-5
format Article
fulltext fulltext
identifier ISSN: 0924-669X
ispartof Applied intelligence (Dordrecht, Netherlands), 2024-07, Vol.54 (13-14), p.7352-7376
issn 0924-669X
1573-7497
language eng
recordid cdi_proquest_journals_3071629023
source Springer Nature - Complete Springer Journals
subjects Accuracy
Algorithms
Artificial Intelligence
Artificial neural networks
Classifiers
Computer Science
Convexity
Datasets
Machine learning
Machines
Manufacturing
Mechanical Engineering
Outliers (statistics)
Processes
Robustness
Skewed distributions
title Robust Fisher-regularized extreme learning machine with asymmetric Welsch-induced loss function for classification
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-02T08%3A46%3A19IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Robust%20Fisher-regularized%20extreme%20learning%20machine%20with%20asymmetric%20Welsch-induced%20loss%20function%20for%20classification&rft.jtitle=Applied%20intelligence%20(Dordrecht,%20Netherlands)&rft.au=Xue,%20Zhenxia&rft.date=2024-07-01&rft.volume=54&rft.issue=13-14&rft.spage=7352&rft.epage=7376&rft.pages=7352-7376&rft.issn=0924-669X&rft.eissn=1573-7497&rft_id=info:doi/10.1007/s10489-024-05528-5&rft_dat=%3Cproquest_cross%3E3071629023%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3071629023&rft_id=info:pmid/&rfr_iscdi=true