Diffusion Bridge AutoEncoders for Unsupervised Representation Learning

Diffusion-based representation learning has attracted substantial attention due to its promising capabilities in latent representation and sample generation. Recent studies have employed an auxiliary encoder to identify a corresponding representation from a sample and to adjust the dimensionality of a latent variable z. Meanwhile, this auxiliary structure invokes an information split problem, because the diffusion and the auxiliary encoder divide the information from the sample into two representations, one for each model. In particular, the information modeled by the diffusion becomes over-regularized because of the static prior distribution on x_T. To address this problem, we introduce Diffusion Bridge AutoEncoders (DBAE), which enable z-dependent endpoint x_T inference through a feed-forward architecture. This structure creates an information bottleneck at z, so x_T becomes dependent on z in its generation. This has two consequences: 1) z holds the full information of samples, and 2) x_T becomes a learnable distribution, no longer static. We propose an objective function for DBAE that enables both reconstruction and generative modeling, with theoretical justification. Empirical evidence supports the effectiveness of the intended design in DBAE, which notably enhances downstream inference quality, reconstruction, and disentanglement. Additionally, DBAE generates high-fidelity samples in unconditional generation.

Bibliographic Details
Main Authors: Kim, Yeongmin; Lee, Kwanghyeon; Park, Minsang; Na, Byeonghu; Moon, Il-Chul
Format: Article
Language: English
Subjects: Computer Science - Learning
Online Access: Order full text
creator Kim, Yeongmin; Lee, Kwanghyeon; Park, Minsang; Na, Byeonghu; Moon, Il-Chul
description Diffusion-based representation learning has attracted substantial attention due to its promising capabilities in latent representation and sample generation. Recent studies have employed an auxiliary encoder to identify a corresponding representation from a sample and to adjust the dimensionality of a latent variable z. Meanwhile, this auxiliary structure invokes an information split problem, because the diffusion and the auxiliary encoder divide the information from the sample into two representations, one for each model. In particular, the information modeled by the diffusion becomes over-regularized because of the static prior distribution on x_T. To address this problem, we introduce Diffusion Bridge AutoEncoders (DBAE), which enable z-dependent endpoint x_T inference through a feed-forward architecture. This structure creates an information bottleneck at z, so x_T becomes dependent on z in its generation. This has two consequences: 1) z holds the full information of samples, and 2) x_T becomes a learnable distribution, no longer static. We propose an objective function for DBAE that enables both reconstruction and generative modeling, with theoretical justification. Empirical evidence supports the effectiveness of the intended design in DBAE, which notably enhances downstream inference quality, reconstruction, and disentanglement. Additionally, DBAE generates high-fidelity samples in unconditional generation.
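The endpoint-inference idea in the description above can be sketched minimally: an encoder compresses a sample x into a low-dimensional z, and a separate head infers the bridge endpoint x_T from z alone, so all sample information reaching the endpoint must pass through the bottleneck z, and x_T is a learnable function rather than a fixed prior. The sketch below is purely illustrative: the dimensions, linear maps, and function names are assumptions, not the paper's implementation, which uses trained neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative only): data dim 8, latent dim 2.
D, Z = 8, 2

# Linear stand-ins for the encoder E: x -> z and the endpoint head H: z -> x_T.
E = rng.normal(size=(Z, D))
H = rng.normal(size=(D, Z))

def encode(x):
    # Auxiliary encoder: maps a sample to its low-dimensional representation z.
    return E @ x

def infer_endpoint(z):
    # Feed-forward endpoint inference: x_T depends on z alone,
    # so z is an information bottleneck for the bridge endpoint.
    return H @ z

x = rng.normal(size=D)
z = encode(x)
x_T = infer_endpoint(z)

# x_T is a deterministic function of z: no sample information bypasses z.
assert z.shape == (Z,) and x_T.shape == (D,)
```

Because `infer_endpoint` never sees x directly, this toy setup mirrors the claimed consequence that z carries the full sample information while the endpoint distribution is learnable rather than static.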
doi_str_mv 10.48550/arxiv.2405.17111
format Article
identifier DOI: 10.48550/arxiv.2405.17111
language eng
recordid cdi_arxiv_primary_2405_17111
source arXiv.org
subjects Computer Science - Learning
title Diffusion Bridge AutoEncoders for Unsupervised Representation Learning
url https://arxiv.org/abs/2405.17111