State‐of‐the‐Art in the Architecture, Methods and Applications of StyleGAN


Bibliographic details
Published in: Computer graphics forum, 2022-05, Vol. 41 (2), p. 591-611
Authors: Bermano, A.H., Gal, R., Alaluf, Y., Mokady, R., Nitzan, Y., Tov, O., Patashnik, O., Cohen‐Or, D.
Format: Article
Language: English
Online access: Full text
Abstract: Generative Adversarial Networks (GANs) have established themselves as a prevalent approach to image synthesis. Of these, StyleGAN offers a fascinating case study, owing to its remarkable visual quality and an ability to support a large array of downstream tasks. This state‐of‐the‐art report covers the StyleGAN architecture, and the ways it has been employed since its conception, while also analyzing its severe limitations. It aims to be of use for both newcomers, who wish to get a grasp of the field, and for more experienced readers who might benefit from seeing current research trends and existing tools laid out. Among StyleGAN's most interesting aspects is its learned latent space. Despite being learned with no supervision, it is surprisingly well‐behaved and remarkably disentangled. Combined with StyleGAN's visual quality, these properties gave rise to unparalleled editing capabilities. However, the control offered by StyleGAN is inherently limited to the generator's learned distribution, and can only be applied to images generated by StyleGAN itself. Seeking to bring StyleGAN's latent control to real‐world scenarios, the study of GAN inversion and latent space embedding has quickly gained in popularity. Meanwhile, this same study has helped shed light on the inner workings and limitations of StyleGAN. We map out StyleGAN's impressive story through these investigations, and discuss the details that have made StyleGAN the go‐to generator. We further elaborate on the visual priors StyleGAN constructs, and discuss their use in downstream discriminative tasks. Looking forward, we point out StyleGAN's limitations and speculate on current trends and promising directions for future research, such as task‐ and target‐specific fine‐tuning.
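The latent‐space editing the abstract describes (moving a latent code along a semantic direction before synthesis) can be sketched in a few lines. This is a minimal NumPy illustration, not StyleGAN's actual API: the 512‐dimensional code, the `smile_direction` vector, and the edit strength are all hypothetical stand‐ins for quantities a real pipeline would obtain from a trained generator and a learned direction.

```python
import numpy as np

def edit_latent(w, direction, alpha):
    """Move a latent code along a unit-norm semantic direction.

    In StyleGAN-style editing, w is a code in the intermediate W space
    and `direction` is a learned semantic vector (e.g. for smile or age);
    alpha controls the edit strength. The edited code is then fed to the
    generator in place of the original.
    """
    direction = direction / np.linalg.norm(direction)
    return w + alpha * direction

# Hypothetical 512-dim code and direction (StyleGAN's W space is 512-dim).
rng = np.random.default_rng(0)
w = rng.standard_normal(512)
smile_direction = rng.standard_normal(512)  # placeholder, not a real learned direction

w_edited = edit_latent(w, smile_direction, alpha=3.0)

# The edit moves w exactly |alpha| units along the normalized direction.
assert np.isclose(np.linalg.norm(w_edited - w), 3.0)
```

Because the W space is well‐behaved and disentangled, such linear walks tend to change one attribute while leaving others roughly intact, which is what the abstract means by "unparalleled editing capabilities".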
DOI: 10.1111/cgf.14503
Publisher: Blackwell Publishing Ltd, Oxford
Rights: © 2022 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.
ISSN: 0167-7055
EISSN: 1467-8659
Source: Wiley Online Library Journals Frontfile Complete; Business Source Complete
Subjects: CCS Concepts; Computer graphics; Computing methodologies → Learning latent representations; Generative adversarial networks; Image manipulation; Neural networks; Trends; Visual discrimination
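The GAN inversion that the abstract highlights, the bridge from StyleGAN's latent control to real images, is commonly posed as optimizing a latent code so the generator reproduces a target image. The toy sketch below mirrors only that general optimization recipe, not any specific published method: the "generator" is a fixed random linear map standing in for a pretrained network, and the squared‐error loss replaces the pixel‐plus‐perceptual losses used in practice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a pretrained generator: a fixed linear map from a
# 16-dim latent space to a 64-dim "image". Real inversion targets a deep
# convolutional generator and minimizes pixel plus perceptual losses.
G = rng.standard_normal((64, 16))
generate = lambda w: G @ w

target = generate(rng.standard_normal(16))  # the "real" image to invert

w = np.zeros(16)  # initial latent guess
lr = 0.01
for _ in range(500):
    residual = generate(w) - target
    grad = G.T @ residual  # gradient of 0.5 * ||G w - target||^2 w.r.t. w
    w -= lr * grad

reconstruction_error = np.linalg.norm(generate(w) - target)
assert reconstruction_error < 1e-3  # the optimized code reproduces the target
```

Once a real image is embedded this way, the same latent‐direction edits that work for generated images can be applied to it, which is why inversion quality directly bounds real‐image editing quality.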