Semi‐supervised multiple empirical kernel learning with pseudo empirical loss and similarity regularization
Multiple empirical kernel learning (MEKL) is a scalable and efficient supervised algorithm based on labeled samples. However, real‐world applications still contain huge amounts of unlabeled samples, which supervised algorithms cannot use. To fully utilize the spatial distribution information of the unlabeled samples, this paper proposes a novel semi‐supervised multiple empirical kernel learning (SSMEKL) algorithm.
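The abstract outlines two mechanisms: pseudo labels derived from the agreement of multiple empirical kernel classifiers, and a similarity regularization that pulls each unlabeled sample toward its neighboring labeled samples. A minimal sketch of both ideas follows; the toy data, the empirical kernel map, the ridge classifiers, and the agreement/confidence thresholds are all illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data: two well-separated Gaussian blobs (hypothetical stand-in
# for the paper's real-world data sets).
n_per = 40
X = np.vstack([rng.normal(-2.0, 0.5, (n_per, 2)),
               rng.normal(+2.0, 0.5, (n_per, 2))])
y = np.hstack([-np.ones(n_per), np.ones(n_per)])

# Only a few samples per class are labeled; the rest are treated as unlabeled.
labeled = np.zeros(len(X), bool)
labeled[[0, 1, 2, n_per, n_per + 1, n_per + 2]] = True

def rbf(X, gamma):
    """RBF kernel matrix over all samples."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def empirical_kernel_map(K, r=10):
    """Map samples into the empirical feature space of kernel matrix K:
    Phi = Q_r * Lambda_r^{1/2}, so that Phi @ Phi.T approximates K."""
    vals, vecs = np.linalg.eigh(K)
    top = np.argsort(vals)[::-1][:r]
    vals, vecs = vals[top], vecs[:, top]
    keep = vals > 1e-10
    return vecs[:, keep] * np.sqrt(vals[keep])

# Two empirical kernels (different RBF widths), as in MEKL.
feats = [empirical_kernel_map(rbf(X, g)) for g in (0.05, 0.2)]

def fit_ridge(Phi, y, lam=1e-2):
    """Regularized least-squares classifier in an empirical feature space."""
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)

# Step 1: supervised fit on the labeled subset of each empirical feature space.
ws = [fit_ridge(P[labeled], y[labeled]) for P in feats]
outs = np.stack([P @ w for P, w in zip(feats, ws)])  # (n_kernels, n_samples)

# Step 2 (pseudo-empirical loss, sketched): where the kernels' outputs agree
# in sign and are confident, adopt the shared sign as a pseudo label.
agree = np.sign(outs[0]) == np.sign(outs[1])
confident = np.abs(outs).min(0) > 0.3
pseudo = (~labeled) & agree & confident
y_pseudo = np.sign(outs.mean(0))

# Step 3 (similarity regularization, sketched): pull each remaining unlabeled
# sample toward the label of its nearest labeled neighbor.
for i in np.where(~labeled & ~pseudo)[0]:
    d = ((X[labeled] - X[i]) ** 2).sum(-1)
    y_pseudo[i] = y[labeled][np.argmin(d)]

acc = (y_pseudo[~labeled] == y[~labeled]).mean()
print(f"pseudo-label accuracy on unlabeled samples: {acc:.2f}")
```

The agreement-plus-confidence gate stands in for the paper's pseudo-empirical loss, and the nearest-labeled-neighbor step stands in for its similarity regularization term; the actual method optimizes both jointly within the MEKL objective.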
Saved in:
Published in: | International journal of intelligent systems 2022-02, Vol.37 (2), p.1674-1696 |
---|---|
Main authors: | Guo, Wei; Wang, Zhe; Ma, Menghao; Chen, Lilong; Yang, Hai; Li, Dongdong; Du, Wenli |
Format: | Article |
Language: | eng |
Subjects: | Algorithms; Classification; Datasets; Intelligent systems; Kernels; Machine learning; multiple empirical kernel learning; multiple kernel learning; Optimization; Performance enhancement; Regularization; semi‐supervised learning; Similarity; Spatial data; Spatial distribution; supervised learning |
Online access: | Full text |
container_end_page | 1696 |
---|---|
container_issue | 2 |
container_start_page | 1674 |
container_title | International journal of intelligent systems |
container_volume | 37 |
creator | Guo, Wei; Wang, Zhe; Ma, Menghao; Chen, Lilong; Yang, Hai; Li, Dongdong; Du, Wenli |
description | Multiple empirical kernel learning (MEKL) is a scalable and efficient supervised algorithm based on labeled samples. However, real‐world applications still contain huge amounts of unlabeled samples, which supervised algorithms cannot use. To fully utilize the spatial distribution information of the unlabeled samples, this paper proposes a novel semi‐supervised multiple empirical kernel learning (SSMEKL). SSMEKL enables multiple empirical kernel learning to achieve better classification performance with a small number of labeled samples and a large number of unlabeled samples. First, SSMEKL uses the collaborative information of multiple kernels to assign pseudo labels to some unlabeled samples during the optimization of the model, and designs a pseudo‐empirical loss to transform the learning process on these unlabeled samples into supervised learning. Second, SSMEKL designs a similarity regularization for unlabeled samples to make full use of their spatial information: the output of an unlabeled sample is required to be similar to that of its neighboring labeled samples. The proposed SSMEKL thus improves the classification performance of MEKL by using a small number of labeled samples together with numerous unlabeled samples. Experimental results on four real‐world data sets and two multiview data sets validate the effectiveness and superiority of the proposed SSMEKL. |
doi_str_mv | 10.1002/int.22690 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0884-8173 |
ispartof | International journal of intelligent systems, 2022-02, Vol.37 (2), p.1674-1696 |
issn | 0884-8173; 1098-111X |
language | eng |
recordid | cdi_proquest_journals_2614667279 |
source | Wiley Online Library Journals Frontfile Complete |
subjects | Algorithms; Classification; Datasets; Intelligent systems; Kernels; Machine learning; multiple empirical kernel learning; multiple kernel learning; Optimization; Performance enhancement; Regularization; semi‐supervised learning; Similarity; Spatial data; Spatial distribution; supervised learning |
title | Semi‐supervised multiple empirical kernel learning with pseudo empirical loss and similarity regularization |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-25T16%3A25%3A48IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Semi%E2%80%90supervised%20multiple%20empirical%20kernel%20learning%20with%20pseudo%20empirical%20loss%20and%20similarity%20regularization&rft.jtitle=International%20journal%20of%20intelligent%20systems&rft.au=Guo,%20Wei&rft.date=2022-02&rft.volume=37&rft.issue=2&rft.spage=1674&rft.epage=1696&rft.pages=1674-1696&rft.issn=0884-8173&rft.eissn=1098-111X&rft_id=info:doi/10.1002/int.22690&rft_dat=%3Cproquest_cross%3E2614667279%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2614667279&rft_id=info:pmid/&rfr_iscdi=true |