Non-Parametric Priors For Generative Adversarial Networks
International Conference on Machine Learning (2019). The advent of generative adversarial networks (GANs) has enabled new capabilities in synthesis, interpolation, and data augmentation heretofore considered very challenging. However, one of the common assumptions in most GAN architectures is the assumption of simple parametric latent-space distributions.
Saved in:
Main authors: | Singh, Rajhans; Turaga, Pavan; Jayasuriya, Suren; Garg, Ravi; Braun, Martin W |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning |
Online access: | Order full text |
creator | Singh, Rajhans; Turaga, Pavan; Jayasuriya, Suren; Garg, Ravi; Braun, Martin W |
description | International Conference on Machine Learning (2019) The advent of generative adversarial networks (GAN) has enabled new
capabilities in synthesis, interpolation, and data augmentation heretofore
considered very challenging. However, one of the common assumptions in most GAN
architectures is the assumption of simple parametric latent-space
distributions. While easy to implement, a simple latent-space distribution can
be problematic for uses such as interpolation. This is due to distributional
mismatches when samples are interpolated in the latent space. We present a
straightforward formalization of this problem; using basic results from
probability theory and off-the-shelf optimization tools, we develop ways to
arrive at appropriate non-parametric priors. The obtained prior exhibits
unusual qualitative properties in terms of its shape, and quantitative benefits
in terms of lower divergence with its mid-point distribution. We demonstrate
that our designed prior helps improve image generation along any Euclidean
straight line during interpolation, both qualitatively and quantitatively,
without any additional training or architectural modifications. The proposed
formulation is quite flexible, paving the way to impose newer constraints on
the latent-space statistics. |
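The mismatch the abstract refers to can be seen directly with the usual standard-normal GAN prior: the Euclidean midpoint of two independent N(0, I) latent vectors is distributed as N(0, 0.5 I), so midpoints are systematically closer to the origin than genuine prior samples. The following NumPy sketch is illustrative only (it is not code from the paper, and the dimension and sample counts are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 128, 100_000  # latent dimension and number of sample pairs (arbitrary)

# Draw pairs of latent vectors from the standard-normal prior N(0, I).
z0 = rng.standard_normal((n, d))
z1 = rng.standard_normal((n, d))

# Euclidean midpoint of each pair: Var = 0.25 + 0.25 = 0.5 per coordinate.
mid = 0.5 * (z0 + z1)

# Prior samples concentrate near radius sqrt(d); midpoints near sqrt(d/2).
print(np.mean(np.linalg.norm(z0, axis=1)))   # ~ sqrt(128) ≈ 11.3
print(np.mean(np.linalg.norm(mid, axis=1)))  # ~ sqrt(64)  ≈ 8.0
```

A generator trained only on prior samples therefore rarely sees inputs in the region the midpoints occupy, which is the interpolation artifact the proposed non-parametric prior is designed to avoid.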
doi_str_mv | 10.48550/arxiv.1905.07061 |
format | Article |
creationdate | 2019-05-16 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.1905.07061 |
language | eng |
recordid | cdi_arxiv_primary_1905_07061 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning |
title | Non-Parametric Priors For Generative Adversarial Networks |
url | https://arxiv.org/abs/1905.07061 |