Independent Mechanism Analysis and the Manifold Hypothesis

Independent Mechanism Analysis (IMA) seeks to address non-identifiability in nonlinear Independent Component Analysis (ICA) by assuming that the Jacobian of the mixing function has orthogonal columns. As is typical in ICA, previous work focused on the case with an equal number of latent components and observed mixtures. Here, we extend IMA to settings with a larger number of mixtures that reside on a manifold embedded in a space of higher dimension than the latent space -- in line with the manifold hypothesis in representation learning. For this setting, we show that IMA still circumvents several non-identifiability issues, suggesting that it can also be a beneficial principle for higher-dimensional observations when the manifold hypothesis holds. Further, we prove that the IMA principle is approximately satisfied with high probability (increasing with the number of observed mixtures) when the directions along which the latent components influence the observations are chosen independently at random. This provides a new and rigorous statistical interpretation of IMA.
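The abstract's final claim, that randomly chosen influence directions approximately satisfy IMA as the number of observed mixtures grows, can be illustrated numerically. The sketch below is not taken from the paper: it measures deviation from column-orthogonality of a mixing Jacobian J via the gap sum_i log ||j_i|| - (1/2) log det(J^T J), which by Hadamard's inequality is nonnegative and zero exactly when the columns are orthogonal; this is a quantity in the spirit of the IMA contrast, but the paper's exact definition is not reproduced in this record, and the function and variable names below are hypothetical.

```python
import numpy as np

def column_orthogonality_gap(J):
    """Gap sum_i log ||j_i|| - 0.5 * log det(J^T J) for a tall matrix J.
    Nonnegative by Hadamard's inequality applied to the Gram matrix;
    zero iff the columns of J are mutually orthogonal (the IMA-style
    condition on the mixing Jacobian)."""
    col_norms = np.linalg.norm(J, axis=0)
    _, logdet = np.linalg.slogdet(J.T @ J)  # log-determinant of the Gram matrix
    return float(np.sum(np.log(col_norms)) - 0.5 * logdet)

rng = np.random.default_rng(0)
d = 5  # number of latent components
for n in (5, 20, 100, 1000):  # number of observed mixtures
    # Columns chosen independently and uniformly at random on the unit sphere in R^n.
    gaps = []
    for _ in range(200):
        J = rng.standard_normal((n, d))
        J /= np.linalg.norm(J, axis=0, keepdims=True)
        gaps.append(column_orthogonality_gap(J))
    print(f"n = {n:4d}: mean orthogonality gap = {np.mean(gaps):.4f}")
```

For columns drawn independently at random, pairwise inner products concentrate around zero at rate roughly 1/sqrt(n), so the gap shrinks as the observation dimension n grows, consistent with the high-probability statement in the abstract.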

Bibliographic Details
Main authors: Ghosh, Shubhangi; Gresele, Luigi; von Kügelgen, Julius; Besserve, Michel; Schölkopf, Bernhard
Format: Article
Language: English
Published: 2023-12-20
Subjects: Computer Science - Learning; Statistics - Machine Learning
Source: arXiv.org
DOI: 10.48550/arxiv.2312.13438
Online access: https://arxiv.org/abs/2312.13438