On the Convergence Rate of Gaussianization with Random Rotations

Gaussianization is a simple generative model that can be trained without backpropagation. It has shown compelling performance on low-dimensional data. As the dimension increases, however, it has been observed that the convergence speed slows down. We show analytically that the number of required layers scales linearly with the dimension for Gaussian input. We argue that this is because the model is unable to capture dependencies between dimensions. Empirically, we find the same linear increase in cost for arbitrary input $p(x)$, but observe favorable scaling for some distributions. We explore potential speed-ups and formulate challenges for further research.
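
The abstract does not spell out the mechanism, but a Gaussianization layer is conventionally built from a random rotation followed by per-dimension marginal Gaussianization. Below is a minimal NumPy/SciPy sketch of that construction (not taken from the paper; the function name and all details are illustrative):

```python
import numpy as np
from scipy.stats import norm, special_ortho_group

def gaussianization_layer(x, rng):
    """One illustrative Gaussianization layer: a random rotation
    followed by marginal (per-dimension) Gaussianization."""
    n, d = x.shape
    # Random rotation: sample an orthogonal matrix uniformly from SO(d).
    Q = special_ortho_group.rvs(d, random_state=rng)
    x = x @ Q.T
    # Marginal Gaussianization: push each dimension's empirical CDF
    # through the inverse standard-normal CDF.
    ranks = np.argsort(np.argsort(x, axis=0), axis=0) + 1
    u = ranks / (n + 1)  # empirical CDF values, strictly inside (0, 1)
    return norm.ppf(u)

# Toy demo: repeatedly apply layers to correlated 2D data.
rng = np.random.default_rng(0)
x = rng.standard_normal((1000, 2)) @ np.array([[1.0, 0.9], [0.0, 0.5]])
for _ in range(10):
    x = gaussianization_layer(x, rng)
```

Each layer leaves the marginals standard normal; the paper's question is how many such layers are needed before the joint distribution is Gaussian as the dimension grows.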

Bibliographic Details
Main Authors: Draxler, Felix; Kühmichel, Lars; Rousselot, Armand; Müller, Jens; Schnörr, Christoph; Köthe, Ullrich
Format: Article
Language: English
Subjects: Computer Science - Learning; Statistics - Machine Learning
DOI: 10.48550/arxiv.2306.13520
Published: 2023-06-23
Source: arXiv.org
Online Access: Order full text