SAT3D: Image-driven Semantic Attribute Transfer in 3D

GAN-based image editing aims at manipulating image attributes in the latent space of generative models. Most previous 2D and 3D-aware approaches mainly focus on editing attributes in images with ambiguous semantics or regions from a reference image, and fail to achieve photographic semantic attribute transfer, such as the beard from a photo of a man. In this paper, we propose an image-driven Semantic Attribute Transfer method in 3D (SAT3D) by editing semantic attributes from a reference image. For the proposed method, the exploration is conducted in the style space of a pre-trained 3D-aware StyleGAN-based generator by learning the correlations between semantic attributes and style code channels. For guidance, we associate each attribute with a set of phrase-based descriptor groups, and develop a Quantitative Measurement Module (QMM) to quantitatively describe the attribute characteristics in images based on descriptor groups, leveraging the image-text comprehension capability of CLIP. During training, the QMM is incorporated into attribute losses to calculate attribute similarity between images, guiding target semantic transfer and irrelevant semantic preservation. We present our 3D-aware attribute transfer results across multiple domains and also conduct comparisons with classical 2D image editing methods, demonstrating the effectiveness and customizability of our SAT3D.
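The abstract's core mechanism — scoring an attribute by comparing an image against a group of phrase descriptors via CLIP-style similarities — can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's implementation: the function names, the softmax temperature, and the placeholder embeddings are all assumptions; real usage would substitute CLIP image and text embeddings.

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between two embedding vectors
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def qmm_scores(image_emb, descriptor_embs, temperature=0.01):
    """Turn the image's similarity to each phrase descriptor in a group
    into a probability distribution over the group (softmax), mimicking
    how CLIP scores an image against a set of text prompts."""
    sims = np.array([cosine(image_emb, d) for d in descriptor_embs])
    logits = sims / temperature
    logits -= logits.max()  # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def attribute_loss(p_edit, p_ref):
    """L1 distance between attribute distributions of the edited and
    reference images: zero when the attribute characteristics match."""
    return float(np.abs(p_edit - p_ref).sum())
```

With identical embeddings for the edited and reference images the two distributions coincide and the loss is zero, which is the behavior the abstract describes for preserving irrelevant semantics.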

Detailed description

Saved in:
Bibliographic details
Published in: arXiv.org, 2024-08
Main authors: Zhai, Zhijun; Wang, Zengmao; Long, Xiaoxiao; Zhou, Kaixuan; Du, Bo
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Editing; Image manipulation; Semantics
Online access: Full text
DOI: 10.48550/arxiv.2408.01664
EISSN: 2331-8422
Source: arXiv.org; Open Access Journals
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Editing; Image manipulation; Semantics