Discriminative and Robust Autoencoders for Unsupervised Feature Selection



Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2023-12, Vol. PP, pp. 1-15
Main Authors: Ling, Yunzhi; Nie, Feiping; Yu, Weizhong; Li, Xuelong
Format: Article
Language: English
Online Access: Order full text
Abstract: Many recent research works on unsupervised feature selection (UFS) have focused on how to exploit autoencoders (AEs) to seek informative features. However, existing methods typically employ the squared error to estimate the data reconstruction, which amplifies the negative effect of outliers and can lead to performance degradation. Moreover, traditional AEs aim to extract latent features that capture the intrinsic information of the data for accurate data recovery. Without incorporating explicit cluster-structure-detecting objectives into the training criterion, AEs fail to capture the latent cluster structure of the data, which is essential for identifying discriminative features. Thus, the selected features lack strong discriminative power. To address these issues, we propose to jointly perform robust feature selection and k-means clustering in a unified framework. Concretely, we exploit an AE with an l_{2,1}-norm as a basic model to seek informative features. To improve robustness against outliers, we introduce an adaptive weight vector for the data reconstruction terms of the AE, which assigns smaller weights to data with larger errors to automatically reduce the influence of outliers, and larger weights to data with smaller errors to strengthen the influence of clean data. To enhance the discriminative power of the selected features, we incorporate k-means clustering into the representation learning of the AE. This allows the AE to continually explore cluster structure information, which can be used to discover more discriminative features. We also present an efficient approach to solve the corresponding optimization problem. Extensive experiments on various benchmark datasets clearly demonstrate that the proposed method outperforms state-of-the-art methods.
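The abstract describes an objective with three parts: a sample-weighted reconstruction loss, an l_{2,1}-norm penalty that zeroes out rows of the encoder weights (and hence whole input features), and a k-means term on the latent codes. The following is a minimal NumPy sketch of such a composite objective; the linear one-layer AE, the specific inverse-error weighting rule, and the hyperparameter names `lam` and `beta` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def l21_norm(W):
    """l2,1-norm: sum of the l2 norms of the rows of W. Penalising it
    drives whole rows of the encoder weights to zero, so the
    corresponding input features can be discarded."""
    return float(np.sum(np.sqrt(np.sum(W ** 2, axis=1))))

def sample_weights(sq_errors, eps=1e-8):
    """Adaptive per-sample weights, inversely related to each sample's
    reconstruction error: outliers (large errors) are down-weighted,
    clean data up-weighted. One common re-weighting rule for
    l2,1-type losses; the paper's exact update may differ."""
    w = 1.0 / (2.0 * np.sqrt(np.asarray(sq_errors, dtype=float) + eps))
    return w / w.sum()  # normalise so the weights sum to 1

def objective(X, W_enc, W_dec, centroids, labels, lam=0.1, beta=0.1):
    """Composite loss: weighted reconstruction + l2,1 penalty on the
    encoder + k-means distortion on the latent codes Z = X @ W_enc."""
    Z = X @ W_enc                        # latent representation (linear AE)
    R = X - Z @ W_dec                    # reconstruction residuals
    sq_errors = np.sum(R ** 2, axis=1)   # per-sample squared error
    w = sample_weights(sq_errors)
    recon = float(np.sum(w * sq_errors))
    kmeans = float(np.sum((Z - centroids[labels]) ** 2))
    return recon + lam * l21_norm(W_enc) + beta * kmeans
```

In an alternating scheme of the kind the abstract hints at, one would fix the weights, update the AE parameters and cluster assignments against this objective, then recompute the weights from the new residuals.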
DOI: 10.1109/TNNLS.2023.3333737
Publisher: IEEE (United States)
PMID: 38090873
ISSN: 2162-237X
EISSN: 2162-2388
Source: IEEE Electronic Library (IEL)
Subjects: Autoencoders (AEs)
clustering
Clustering algorithms
Data models
Feature extraction
feature selection
neural networks
Optics
Representation learning
Robustness
Training data
unsupervised learning