Semantic Hierarchy Emerges in Deep Generative Representations for Scene Synthesis
Despite the great success of Generative Adversarial Networks (GANs) in synthesizing images, there is still limited understanding of how photo-realistic images are generated from the layer-wise stochastic latent codes introduced in recent GANs. In this work, we show that a highly structured semantic hierarchy emerges in the deep generative representations of state-of-the-art GANs such as StyleGAN and BigGAN trained for scene synthesis. By probing the per-layer representation with a broad set of semantics at different abstraction levels, we quantify the causality between the layer-wise activations and the semantics occurring in the output image. This quantification identifies human-understandable variation factors that can be used to steer the generation process, such as changing the lighting condition or varying the viewpoint of the scene. Extensive qualitative and quantitative results suggest that the generative representations learned by GANs with layer-wise latent codes are specialized to synthesize various concepts in a hierarchical manner: the early layers tend to determine the spatial layout, the middle layers control the categorical objects, and the later layers render the scene attributes as well as the color scheme. Identifying such a set of steerable variation factors enables high-fidelity scene editing with well-learned GAN models without any retraining (code and demo video are available at https://genforce.github.io/higan).
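The editing idea in the abstract can be illustrated with a minimal sketch: a StyleGAN-like generator takes one latent code per layer, and a semantic can be steered by shifting only the codes of the layers that control it. Everything below is hypothetical (NumPy arrays stand in for a real generator, and `steer`, the layer count, and the `lighting` direction are illustrative, not the paper's API — the actual code is at https://genforce.github.io/higan).

```python
import numpy as np

def steer(layer_codes: np.ndarray, direction: np.ndarray,
          layers: range, strength: float) -> np.ndarray:
    """Shift the latent codes of selected layers along a semantic direction.

    layer_codes: (num_layers, dim) -- one latent code per generator layer.
    direction:   (dim,)            -- unit vector for one semantic (e.g. lighting).
    layers:      which layers to edit; per the paper's findings, early layers
                 govern spatial layout, middle layers the categorical objects,
                 and late layers the scene attributes and color scheme.
    """
    edited = layer_codes.copy()
    edited[layers.start:layers.stop] += strength * direction
    return edited

rng = np.random.default_rng(0)
codes = rng.standard_normal((14, 512))      # 14 layers, 512-dim codes (illustrative)
lighting = rng.standard_normal(512)
lighting /= np.linalg.norm(lighting)        # normalize the semantic direction

# Edit only the late layers, which would change an attribute such as lighting
# while leaving layout and objects untouched.
edited = steer(codes, lighting, range(10, 14), strength=3.0)

assert np.allclose(edited[:10], codes[:10])      # early/middle layers unchanged
assert not np.allclose(edited[10:], codes[10:])  # late layers shifted
```

The key design point mirrored here is that the same direction applied at a different layer range would steer a different level of the semantic hierarchy.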
Published in: | International journal of computer vision 2021-05, Vol.129 (5), p.1451-1466 |
---|---|
Main authors: | Yang, Ceyuan; Shen, Yujun; Zhou, Bolei |
Format: | Article |
Language: | English |
Subjects: | Artificial Intelligence; Computer Imaging; Computer Science; Generative adversarial networks; Image Processing and Computer Vision; Pattern Recognition; Pattern Recognition and Graphics; Representations; Semantics; Synthesis; Vision |
Online access: | Full text |
container_end_page | 1466 |
---|---|
container_issue | 5 |
container_start_page | 1451 |
container_title | International journal of computer vision |
container_volume | 129 |
creator | Yang, Ceyuan Shen, Yujun Zhou, Bolei |
description | Despite the great success of Generative Adversarial Networks (GANs) in synthesizing images, there is still limited understanding of how photo-realistic images are generated from the layer-wise stochastic latent codes introduced in recent GANs. In this work, we show that a highly structured semantic hierarchy emerges in the deep generative representations of state-of-the-art GANs such as StyleGAN and BigGAN trained for scene synthesis. By probing the per-layer representation with a broad set of semantics at different abstraction levels, we quantify the causality between the layer-wise activations and the semantics occurring in the output image. This quantification identifies human-understandable variation factors that can be used to steer the generation process, such as changing the lighting condition or varying the viewpoint of the scene. Extensive qualitative and quantitative results suggest that the generative representations learned by GANs with layer-wise latent codes are specialized to synthesize various concepts in a hierarchical manner: the early layers tend to determine the spatial layout, the middle layers control the categorical objects, and the later layers render the scene attributes as well as the color scheme. Identifying such a set of steerable variation factors enables high-fidelity scene editing with well-learned GAN models without any retraining (code and demo video are available at https://genforce.github.io/higan). |
doi_str_mv | 10.1007/s11263-020-01429-5 |
format | Article |
publisher | Springer US, New York |
orcid | 0000-0003-1417-1938; 0000-0003-4030-0684 |
fulltext | fulltext |
identifier | ISSN: 0920-5691 |
ispartof | International journal of computer vision, 2021-05, Vol.129 (5), p.1451-1466 |
issn | 0920-5691 1573-1405 |
language | eng |
recordid | cdi_proquest_journals_2522236753 |
source | Springer Nature - Complete Springer Journals |
subjects | Artificial Intelligence; Computer Imaging; Computer Science; Generative adversarial networks; Image Processing and Computer Vision; Pattern Recognition; Pattern Recognition and Graphics; Representations; Semantics; Synthesis; Vision |
title | Semantic Hierarchy Emerges in Deep Generative Representations for Scene Synthesis |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-23T17%3A17%3A07IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_proqu&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Semantic%20Hierarchy%20Emerges%20in%20Deep%20Generative%20Representations%20for%20Scene%20Synthesis&rft.jtitle=International%20journal%20of%20computer%20vision&rft.au=Yang,%20Ceyuan&rft.date=2021-05-01&rft.volume=129&rft.issue=5&rft.spage=1451&rft.epage=1466&rft.pages=1451-1466&rft.issn=0920-5691&rft.eissn=1573-1405&rft_id=info:doi/10.1007/s11263-020-01429-5&rft_dat=%3Cgale_proqu%3EA660897775%3C/gale_proqu%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2522236753&rft_id=info:pmid/&rft_galeid=A660897775&rfr_iscdi=true |