Self-Supervised Information Bottleneck for Deep Multi-View Subspace Clustering
In this paper, we explore the problem of deep multi-view subspace clustering from an information-theoretic point of view. We extend the traditional information bottleneck principle to learn common information among different views in a self-supervised manner, and accordingly establish a new framework called Self-supervised Information Bottleneck based Multi-view Subspace Clustering (SIB-MSC). Inheriting the advantages of the information bottleneck, SIB-MSC learns a latent space for each view that captures the information common to the latent representations of the different views, removing superfluous information from the view itself while retaining sufficient information for the latent representations of the other views. In effect, the latent representation of each view provides a self-supervised signal for training the latent representations of the other views. Moreover, SIB-MSC disentangles a second latent space for each view to capture view-specific information via mutual-information-based regularization terms, further improving multi-view subspace clustering performance. Extensive experiments on real-world multi-view data demonstrate that our method achieves superior performance over related state-of-the-art methods.
Saved in:
Published in: | IEEE transactions on image processing 2023-01, Vol.PP, p.1-1 |
---|---|
Main Authors: | Wang, Shiye, Li, Changsheng, Li, Yanming, Yuan, Ye, Wang, Guoren |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
container_end_page | 1 |
---|---|
container_issue | |
container_start_page | 1 |
container_title | IEEE transactions on image processing |
container_volume | PP |
creator | Wang, Shiye Li, Changsheng Li, Yanming Yuan, Ye Wang, Guoren |
description | In this paper, we explore the problem of deep multi-view subspace clustering from an information-theoretic point of view. We extend the traditional information bottleneck principle to learn common information among different views in a self-supervised manner, and accordingly establish a new framework called Self-supervised Information Bottleneck based Multi-view Subspace Clustering (SIB-MSC). Inheriting the advantages of the information bottleneck, SIB-MSC learns a latent space for each view that captures the information common to the latent representations of the different views, removing superfluous information from the view itself while retaining sufficient information for the latent representations of the other views. In effect, the latent representation of each view provides a self-supervised signal for training the latent representations of the other views. Moreover, SIB-MSC disentangles a second latent space for each view to capture view-specific information via mutual-information-based regularization terms, further improving multi-view subspace clustering performance. Extensive experiments on real-world multi-view data demonstrate that our method achieves superior performance over related state-of-the-art methods. |
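The abstract describes clustering in learned per-view latent spaces. Subspace clustering methods of this family typically rely on a self-expressiveness step: each latent sample is reconstructed as a combination of the others, and the resulting coefficient matrix yields the affinity used for spectral clustering. As a minimal illustrative sketch (not the authors' code; the regularization weight `lam` and the toy data are assumptions), the ridge-regularized self-expression problem has a closed-form solution:

```python
import numpy as np

def self_expression(Z, lam=0.1):
    # Closed-form ridge solution of min_C ||Z - C Z||_F^2 + lam ||C||_F^2,
    # where rows of Z are latent samples: C = Z Z^T (Z Z^T + lam I)^{-1}.
    G = Z @ Z.T
    C = G @ np.linalg.inv(G + lam * np.eye(G.shape[0]))
    np.fill_diagonal(C, 0.0)  # common heuristic: suppress trivial self-representation
    return C

def affinity(C):
    # Symmetrized affinity matrix that spectral clustering would consume.
    return 0.5 * (np.abs(C) + np.abs(C.T))

# Toy latent representations: two clusters lying in orthogonal 1-D subspaces.
Z = np.array([[1.0, 0.0], [2.0, 0.0], [3.0, 0.0],
              [0.0, 1.0], [0.0, 2.0], [0.0, 3.0]])
A = affinity(self_expression(Z))
```

Because the two toy subspaces are orthogonal, the Gram matrix is block-diagonal and the cross-cluster affinities vanish, which is exactly the structure spectral clustering exploits.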
doi_str_mv | 10.1109/TIP.2023.3246802 |
format | Article |
fullrecord | United States: IEEE. PMID: 37027595. CODEN: IIPRE4. EISSN: 1941-0042. ORCID iDs: 0000-0002-8973-0231, 0000-0001-9789-7632, 0000-0002-0181-8379, 0000-0002-0247-9866 |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1057-7149 |
ispartof | IEEE transactions on image processing, 2023-01, Vol.PP, p.1-1 |
issn | 1057-7149 1941-0042 |
language | eng |
recordid | cdi_pubmed_primary_37027595 |
source | IEEE Electronic Library (IEL) |
subjects | Clustering Data models Deep learning Feature extraction Information bottleneck Information theory multi-view Mutual information Performance enhancement Regularization Representation learning Representations self-supervised learning subspace clustering Subspaces Task analysis Training |
title | Self-Supervised Information Bottleneck for Deep Multi-View Subspace Clustering |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-10T20%3A08%3A55IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Self-Supervised%20Information%20Bottleneck%20for%20Deep%20Multi-View%20Subspace%20Clustering&rft.jtitle=IEEE%20transactions%20on%20image%20processing&rft.au=Wang,%20Shiye&rft.date=2023-01-01&rft.volume=PP&rft.spage=1&rft.epage=1&rft.pages=1-1&rft.issn=1057-7149&rft.eissn=1941-0042&rft.coden=IIPRE4&rft_id=info:doi/10.1109/TIP.2023.3246802&rft_dat=%3Cproquest_RIE%3E2798710760%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2784633453&rft_id=info:pmid/37027595&rft_ieee_id=10053658&rfr_iscdi=true |