Idempotent Generative Network


Detailed Description

Bibliographic Details
Published in: arXiv.org, 2023-11
Main authors: Shocher, Assaf; Dravid, Amil; Gandelsman, Yossi; Mosseri, Inbar; Rubinstein, Michael; Efros, Alexei A.
Format: Article
Language: English
Online access: Full text
Description: We propose a new approach for generative modeling based on training a neural network to be idempotent. An idempotent operator is one that can be applied sequentially without changing the result beyond the initial application, namely \(f(f(z))=f(z)\). The proposed model \(f\) is trained to map a source distribution (e.g., Gaussian noise) to a target distribution (e.g., realistic images) using the following objectives: (1) Instances from the target distribution should map to themselves, namely \(f(x)=x\). We define the target manifold as the set of all instances that \(f\) maps to themselves. (2) Instances from the source distribution should map onto the defined target manifold. This is achieved by optimizing the idempotence term \(f(f(z))=f(z)\), which encourages the range of \(f(z)\) to lie on the target manifold. Under ideal assumptions, such a process provably converges to the target distribution. This strategy yields a model that generates an output in one step and maintains a consistent latent space, while also allowing sequential applications for refinement. Additionally, we find that by processing inputs from both the target and source distributions, the model adeptly projects corrupted or modified data back onto the target manifold. This work is a first step towards a "global projector" that enables projecting any input into a target data distribution.
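The two objectives in the abstract can be written as simple reconstruction and idempotence losses. The sketch below is illustrative only: it uses a hypothetical toy linear map `f(v) = W @ v + b` in place of the paper's neural network, and plain squared-error terms as stand-ins for the actual training losses; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: a linear map standing in for the network f.
d = 4
W = rng.standard_normal((d, d)) * 0.1
b = rng.standard_normal(d) * 0.1

def f(v):
    return W @ v + b

x = rng.standard_normal(d)  # sample from the target distribution (e.g., an image)
z = rng.standard_normal(d)  # sample from the source distribution (Gaussian noise)

# Objective (1): target instances are fixed points of f, i.e. f(x) = x.
loss_rec = np.mean((f(x) - x) ** 2)

# Objective (2): idempotence on source samples, f(f(z)) = f(z),
# which pushes the range of f onto the target manifold.
fz = f(z)
loss_idem = np.mean((f(fz) - fz) ** 2)

total = loss_rec + loss_idem  # both terms are driven toward zero during training
```

In an actual training loop these terms would be minimized jointly by gradient descent over the network parameters; the one-step generation property follows because a single application of \(f\) already lands on the set of its own fixed points.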
Identifier: EISSN 2331-8422
Source: Freely Accessible Journals
Subjects: Manifolds; Neural networks; Normal distribution; Operators (mathematics); Random noise