COCO-GAN: Generation by Parts via Conditional Coordinating

Humans can only interact with part of the surrounding environment due to biological restrictions, so we learn to reason about the spatial relationships across a series of observations and piece together our surroundings. Inspired by this behavior, and by the fact that machines also have computational constraints, we propose the COnditional COordinate GAN (COCO-GAN), in which the generator produces images by parts, conditioned on their spatial coordinates. The discriminator, in turn, learns to judge realism across multiple assembled patches in terms of global coherence, local appearance, and continuity across patch edges. Although full images are never generated during training, we show that COCO-GAN can produce state-of-the-art-quality full images during inference. We further demonstrate a variety of novel applications enabled by making the network coordinate-aware. First, we extrapolate along the learned coordinate manifold and generate off-the-boundary patches; combined with the originally generated full image, COCO-GAN can produce images larger than the training samples, which we call "beyond-boundary generation". We then showcase panorama generation within a cylindrical coordinate system that inherently preserves a horizontally cyclic topology. On the computation side, COCO-GAN has a built-in divide-and-conquer paradigm that reduces memory requirements during training and inference, provides high parallelism, and can generate parts of images on demand.
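The generation-by-parts scheme described above can be sketched in a few lines. Everything here is an illustrative stand-in, not the paper's actual model: `toy_generator` replaces COCO-GAN's CNN generator, and the patch size and grid are arbitrary. The point is the inference loop — each patch is produced independently, conditioned only on its spatial coordinate, and the full image exists only after assembly.

```python
import numpy as np

PATCH = 32    # patch height/width per generator call (illustrative)
GRID = (4, 4) # patches per column / row of the full image

def toy_generator(z, coord):
    """Hypothetical stand-in for a generator G(z, c): returns one
    PATCH x PATCH patch conditioned on its normalized spatial
    coordinate `coord` in [-1, 1]^2 (a real model is a CNN)."""
    y, x = coord
    value = np.tanh(z.mean() + y + x)
    return np.full((PATCH, PATCH), value, dtype=np.float32)

def generate_by_parts(z, grid=GRID, patch=PATCH):
    """Assemble a full image from independently generated patches.
    Each patch is conditioned on the center coordinate of the cell
    it will occupy; the full image is never produced by a single
    forward pass."""
    rows, cols = grid
    full = np.zeros((rows * patch, cols * patch), dtype=np.float32)
    for i in range(rows):
        for j in range(cols):
            # normalized patch-center coordinate in [-1, 1]
            cy = -1.0 + (2 * i + 1) / rows
            cx = -1.0 + (2 * j + 1) / cols
            full[i * patch:(i + 1) * patch,
                 j * patch:(j + 1) * patch] = toy_generator(z, (cy, cx))
    return full

img = generate_by_parts(np.zeros(8))
print(img.shape)  # (128, 128)
```

Note that nothing restricts `coord` to [-1, 1]: feeding coordinates outside that range is exactly the coordinate extrapolation behind "beyond-boundary generation".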

Detailed description

Saved in:
Bibliographic details
Main authors: Lin, Chieh Hubert; Chang, Chia-Che; Chen, Yu-Sheng; Juan, Da-Cheng; Wei, Wei; Chen, Hwann-Tzong
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Lin, Chieh Hubert; Chang, Chia-Che; Chen, Yu-Sheng; Juan, Da-Cheng; Wei, Wei; Chen, Hwann-Tzong
description Humans can only interact with part of the surrounding environment due to biological restrictions, so we learn to reason about the spatial relationships across a series of observations and piece together our surroundings. Inspired by this behavior, and by the fact that machines also have computational constraints, we propose the COnditional COordinate GAN (COCO-GAN), in which the generator produces images by parts, conditioned on their spatial coordinates. The discriminator, in turn, learns to judge realism across multiple assembled patches in terms of global coherence, local appearance, and continuity across patch edges. Although full images are never generated during training, we show that COCO-GAN can produce state-of-the-art-quality full images during inference. We further demonstrate a variety of novel applications enabled by making the network coordinate-aware. First, we extrapolate along the learned coordinate manifold and generate off-the-boundary patches; combined with the originally generated full image, COCO-GAN can produce images larger than the training samples, which we call "beyond-boundary generation". We then showcase panorama generation within a cylindrical coordinate system that inherently preserves a horizontally cyclic topology. On the computation side, COCO-GAN has a built-in divide-and-conquer paradigm that reduces memory requirements during training and inference, provides high parallelism, and can generate parts of images on demand.
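The cylindrical coordinate system mentioned for panorama generation can be illustrated with a small sketch. This is an assumption about the mechanism, not the paper's code: the idea is that the horizontal coordinate is mapped onto the unit circle, so a generator conditioned on (cos θ, sin θ) receives identical conditions at the left and right edges of the panorama, making the topology horizontally cyclic by construction.

```python
import math

def cyclic_coord(j, cols):
    """Map horizontal cell index j (out of `cols` columns) to a
    point on the unit circle. Column `cols` wraps back to column 0,
    so a generator conditioned on this pair instead of a raw x
    coordinate produces seamlessly wrapping panoramas."""
    theta = 2.0 * math.pi * j / cols
    return (math.cos(theta), math.sin(theta))

# Column 0 and column `cols` yield (numerically) the same condition,
# so the two edges of the panorama receive matching inputs.
a = cyclic_coord(0, 8)
b = cyclic_coord(8, 8)
print(all(math.isclose(x, y, abs_tol=1e-12) for x, y in zip(a, b)))  # True
```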
doi_str_mv 10.48550/arxiv.1904.00284
format Article
date 2019-03-30
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.1904.00284
language eng
recordid cdi_arxiv_primary_1904_00284
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
Computer Science - Learning
Statistics - Machine Learning
title COCO-GAN: Generation by Parts via Conditional Coordinating
url https://arxiv.org/abs/1904.00284